Insights & Opinions

Agentic AI and the Illusion of Alignment in Banking

Mon, 30 Mar 2026

Rik Coeckelbergs, Founder and CEO, The Banking Scene


On March 17, we hosted an Agentic AI Quiz Show. What started as a familiar format quickly exposed something less familiar: we are not aligned on what we are actually building. It felt a bit like Back to the Future: it had the feel of Fintech Uncut, our world-famous Fintech Comedy Quiz Show from a few years back, but tweaked into an interactive session on the future rather than the news of the week.

I was the quiz master, and the panel included Chris Skinner, Andrew Vorster, and a variety of industry participants.

Once people began explaining their answers, the misalignment became evident. We disagreed on what AI truly is, how quickly it will develop, and what implications it may have for banks. If we cannot even capture and align on AI itself, Agentic AI gets even more complicated. More than any specific insight, that realisation is what resonated most with me.

People spoke openly, challenged assumptions, and explored ideas without the need to position themselves. They did so because they felt they were in a safe space, and that is why the recordings were not shared.

What follows is not a recap of the key questions, but an account of what became visible in the discussion around them.

We Use the Same Words, But Mean Different Things

One of the more striking elements of the session was how quickly the conversation moved from answers to definitions. Not explicitly, but implicitly, through the way people argued their position.

Some approached Agentic AI as the next step in automation, an evolution of what we have been doing for years: improving processes, reducing manual work, increasing efficiency. Others framed it as fundamentally different, where systems no longer support decisions but instead take them within a given context.

I hope Peter Van Hees’ session during The Banking Scene Conference 2026 Amsterdam, or his book, brought enough clarity to prevent future discussions on Agentic AI from pivoting from technology into ontology.

This issue is more significant than it appears. When two organisations assert they are investing in Agentic AI but have different interpretations, they aren't simply on separate paths; they are addressing different problems. One focuses on optimising existing systems, while the other begins to reconsider decision-making processes.

We've observed this pattern before: beneath the surface, interpretations differ, priorities diverge, and strategies follow separate paths. What appears as consensus is, in reality, a lack of it.

The Shift Is Not Visible at First Sight

There is a tendency to describe this as a shift from human to machine. That framing simplifies the discussion, but it also misses what is actually changing.

What used to be part of someone’s job is increasingly executed by a combination of models, workflows, and integrations. The idea of “digital colleagues” is sometimes presented as a metaphor, but it is becoming an operational reality. Systems are taking on responsibilities that were once clearly assigned to individuals.

People will not disappear, but their positions shift, just not always in a linear or predictable way. What remains is not simply “higher value work,” as it is often described, but work that is less structured, less repeatable, and more dependent on judgment.

At the same time, the way individuals interact with this complexity will likely become simpler. Instead of dealing with multiple systems, the expectation is that a single layer will orchestrate everything in the background. One interface, multiple underlying agents, increasing abstraction.

The Nature of Work Changes Before the Number of Jobs

This shift has consequences for how people experience their work. Employees remain accountable for outcomes, but their involvement becomes more indirect.

They oversee and interpret rather than directly execute, creating distance between action and outcome. Instead of execution, people will be more involved in interpretation, intervention, and accountability for outcomes that are increasingly produced elsewhere. The job risks becoming more mentally demanding.

It also alters how expertise develops: with fewer end-to-end tasks, understanding becomes fragmented, shifting from doing to observing, which can weaken good judgment.

Consequently, the focus shifts from system behaviour to what it means for people to operate within these systems, as responsibility is redistributed and roles are redefined in ways still unclear.

The discussion on jobs followed a familiar line. There was recognition that Agentic AI will have an impact, possibly larger than previous waves of automation. There was also the usual nuance that jobs will change before they disappear.

As routine tasks become automated, what remains are the more complex cases, situations where the system is uncertain, or decisions require judgment. Over time, this transforms roles into something different. Jobs that once included both simple and complex tasks now mainly involve complex ones.

This influences how people experience their work, the sustainability of their work, and how organisations approach productivity. Simultaneously, the cost structure changes: lower-cost roles are automated first, while new, more specialised, and costlier roles emerge. Designing, maintaining, and managing these systems is also challenging. Ultimately, this is not just a straightforward reduction but a reallocation of tasks and responsibilities.

Organisations Will Adapt, But Not Through Reinvention

At the organisational level, this shift creates a different type of challenge.

The idea of fully automated organisations surfaced briefly, but did not resonate as a realistic near-term outcome. It is a concept that has been discussed for years, often more as a thought experiment than as an operational model. What feels more plausible is not replacement, but gradual reconfiguration.

Work is no longer contained within clear functional boundaries, but increasingly flows across systems that cut through teams, departments, and even institutions. Tasks are not reassigned from one role to another; they are absorbed into processes that operate independently of traditional structures.

This has implications for governance. When decisions emerge from interconnected systems, it becomes harder to trace their origins and determine where to intervene when something goes wrong. Responsibility becomes less visible and more distributed.

In that context, control shifts away from hierarchy, emphasising system interactions over reporting lines. The organisation remains, but its function changes: it operates less as a role-based structure and more as a network of dependencies. This transition is unlikely to occur all at once; it will develop gradually and unevenly, often without being acknowledged as a transformation.

Over time, it will modify organisational operations, not by replacing them, but by changing the manner of control within them.

Risk Moves from Individual Systems to the System as a Whole

When the discussion shifted to risk, the focus quickly moved away from familiar topics. Although fraud, security, and model accuracy were mentioned, they did not dominate the conversation. Instead, concern arose about unintended consequences; not just isolated errors, but how systems behave when they interact.

Agentic setups do not operate in isolation; they respond to the same inputs, influence each other, and can escalate small issues rapidly due to their speed. The challenge is not only whether a single model is correct, but whether the entire system operates as intended. This presents a different kind of problem, one that current governance frameworks are not fully equipped to handle.

These frameworks typically assume systems support human decisions, making them less suitable for environments where systems make decisions and interact directly. The question has shifted from "does it work" to "does it behave in a controlled manner once deployed."

A few days after the quiz show, the press was very vocal about a few cases that illustrate exactly that: “Meta AI agent’s instruction causes large sensitive data leak to employees” and “McKinsey rushes to fix AI system after hacker exposes flaws”, demonstrating that even the most reputable players in this field can fall victim to these risks.

The Question Is Not Who Builds It, But Where It Sits

One of the questions we put to the audience was: will banks build the most powerful AI agents, or will that role be taken by others? The room was divided. Roughly 60% did not believe banks would lead, while 40% still saw that as a realistic outcome.

Those who believe banks will build their own agents tend to frame it as a question of control. Owning the technology is seen as a way to retain ownership over data, processes, and ultimately the customer relationship. Handing that layer over too easily would reduce banks to infrastructure, something few are willing to accept, at least in principle.

At the same time, the opposing view was not based on a lack of ambition but on a more pragmatic reading of the current landscape. The argument was not that banks cannot build, but that others are already ahead where it matters most. Not necessarily in the balance sheet or regulation, but in user experience, distribution, and the ability to position themselves at the interface.

Whether banks build the agents or not may turn out to be secondary. What matters is where those agents sit. If the interaction with the customer is mediated by a layer outside the bank, then control shifts, regardless of who developed the underlying capability. It is one thing to lose control over the interface. It is another to lose influence over how decisions are initiated and executed.

The division in the room reflects that tension. Not between belief and disbelief, but between two different readings of where control will realistically reside. And that is probably where the real question lies: not who builds the agents, but who shapes the interaction they are part of.

The Customer Side Is Still Underestimated

When asked when customers would trust AI agents more than human advisors for everyday financial decisions, most answers clustered around 2030 and beyond. Not immediate, but not particularly far away either.

The discussion that followed made it clear that this is not a simple timeline question. For everyday interactions, the shift is already happening. Payments, budgeting, basic financial decisions: many customers already rely on systems without questioning them.

Trust, in that context, is not something that needs to be earned again. It is already embedded in behaviour.

At the same time, the conversation highlighted that this trust does not extend uniformly. For more complex or emotionally loaded decisions, the need for human interaction remains, according to the bankers in the audience, not because systems cannot provide an answer, but because reassurance, interpretation, and context still matter.

The discussion became more interesting when it moved beyond trust and into behaviour. There were several references to the idea that customers will increasingly rely on their own agents, either directly or through platforms that act on their behalf. Some projections mentioned during the session suggest this shift could happen sooner than expected, with a growing share of interactions initiated or even completed by systems rather than by individuals.

If that happens, the way banks design their interactions becomes less relevant. Banks today still optimise for human behaviour. Journeys are designed to guide attention, interfaces are built to influence decisions, and communication is structured to create engagement. An agent does not engage in that way. It does not navigate a journey or respond to messaging. It evaluates, compares, and executes.

That changes the dynamics of interaction in a more fundamental way than the discussion initially suggested. People may trust people more, but if convenience through agents increases with the right level of reliability, people will adapt, and banks will talk to systems more than to people.

It puts digital transformation in a different perspective: for years, banks demanded that people shift to digital, and now consumers may be taking the lead, demanding that banks talk to their digital twins.

The Real Question Is About Relevance

If there is one key takeaway from the session, it is not that the industry lacks awareness but that it lacks alignment. There is no shared perspective on timing, no clear consensus on impact, and no agreement on where control will ultimately reside. Meanwhile, the technology is already being implemented, and job reductions are already being announced.

That combination shifts the focus from a technology discussion to a strategic one. The question is not whether Agentic AI will become part of banking, but what role the bank aims to play in an environment influenced by it. This is not purely theoretical but involves choices that will shape positioning over time.

The direction is already clear. Systems are becoming more autonomous, interactions are becoming more abstract, and decision-making is becoming more decentralised. The question is not whether banks will participate in this system, but whether they will continue to influence it or simply operate within a framework defined by others.


Join us in Brussels on May 28 for our flagship event and Belgium's Biggest Banking Conference, to get involved in the discussions on Agentic AI and much more as we continue to explore our theme of "Rethinking Relevance".

The Banking Scene: Director's Cut

After a couple of weeks' break due to hectic schedules and lots of travel, Rik and Andrew are back in this week's episode to add more insights and context to the article above. As always, you can view the discussion below or follow on your favourite podcast channel here.
