Insights & Opinions

From Fragmentation to an AI-Native Future: How AI Is Redefining Banking’s Growth Engine

Wed, 07 Jan 2026

assets/site/Andrew-Vorster-sq.jpg
Andrew Vorster Head of Growth The Banking Scene

From fragmetation to ai future banking featured

AI has crashed into banking with all the subtlety of a tidal wave. Every board is talking about it, every line of business has a pilot, and every vendor pitch now comes with “AI-powered” somewhere on the slide.

But if you look under the hood of most institutions, the reality is far less glamorous: fragmented systems, siloed teams and experimental AI use cases that never quite make it into production

I recently caught up with Backbase’s Vice President of Technology, Deepak Pandey, who cut through the noise and laid out what it actually takes to turn AI into a real growth engine, not just another bolt-on to old architectures.

We covered a LOT of ground, which I attempt to summarise here, but thankfully you can watch or listen to the full interview on the links below to get it all from Deepak directly!

From AI experiments to an “AI arms race”

Most banks are not short of AI initiatives. If anything, they have too many.

Deepak described what he’s seeing as a kind of AI arms race inside the enterprise. Every business owner, from onboarding to account servicing to collections, wants to modernise their domain with AI. The intent is good, but the execution is fragmented: multiple teams running pilots in isolation, with overlapping tools and duplicated effort and no single owner for “how AI is done” across the bank.

The first consequence is inefficiency. The second is risk. When nobody can point to who owns the AI platform or the standards, you end up with what Deepak calls agent sprawl: lots of agents, no coherent way to govern or scale them.

His closing recommendation was simple but powerful:

Identify who in the bank is responsible for “platformifying” AI and give them the authority to enable everyone else, with a central yet democratised approach.

In other words: stop thinking in terms of individual AI projects and start thinking in terms of an AI-native platform.

Why AI can’t just be “sprinkled” on top

Traditional digital transformation was hard, but at least it was fairly deterministic. You modernised channels, wrapped the core in APIs, moved workloads to the cloud, introduced microservices and observability and you more or less knew what you were getting.

AI is different.

You’re no longer just serving deterministic business logic; you’re orchestrating non-deterministic models and agents that can behave in unexpected ways. This is where Deepak draws a line between:

  • Human execution with AI assistance (what most banks do today – chatbots, copilots, embedded models here and there), and
  • Agent execution with human supervision (where agents can take meaningful action, while humans set direction and provide oversight).

You don’t get to that second state by adding a “smart” feature to every channel. You get there by putting AI at the core, supported by a proper platform and landing zone for AI innovation.

The AI landing zone: platform thinking for a new era

Deepak’s key idea is that banks now need an AI landing zone, the same way they once needed a cloud landing zone when moving into the cloud-native world.

This AI landing zone should bring together the building blocks needed to safely and repeatedly create and run AI agents:

  • Model serving – how and where models are deployed and accessed
  • Agent frameworks – how agents are created, orchestrated and extended
  • Governance and security – guardrails, access controls, and traceability
  • Evaluation and observability – monitoring agent behaviour, quality and drift

Crucially, according to Deepak, this is not just one tool. It’s typically 10 or more tools, from hyperscalers to open source to specialist vendors, all composed into a coherent platform that everyone in the bank can use.

Backbase’s own approach is to run this within their Grand Central integration platform and AI agent platform, but the principle applies regardless of vendor:

Think landing zone first, point solutions second.

Fixing the data problem: domain-led connectivity

Of course, none of this works without data.

Most banks have done some work on APIs, but data still lives in silos across multiple cores, payment engines, CRMs and a growing collection of fintech solutions.

Deepak argued that AI is forcing banks to finish the API job they started. It’s no longer a “nice to have”, because large language models need consistent, well-described interfaces to interact with systems effectively.

Backbase’s answer is what they call domain-led connectivity:

  • Break the bank into clear domains: deposits, loans, payments, cards, fraud, etc.
  • Standardise the API contracts for each domain regardless of which vendor sits behind it.
  • Use an industry domain model (Backbase uses BIAN – the Banking Industry Architecture Network) as the common language across systems.

Once those APIs are standardised, they can be exposed to models as MCP servers (Model Context Protocol), with fine-grained control over which operations are available to which agents. Initially, you may only expose low-risk operations, then gradually expand as governance matures.

The outcome is that AI Agents can “see” and act across domains without bespoke plumbing each time and data access is controlled centrally rather than hacked together case by case.

A concrete example: supercharging the relationship manager

To make this platform approach real, Deepak walked through a high-value use case in wealth management.

Think about a relationship manager (RM) working with high-net-worth clients. Preparing for a portfolio review usually means: reviewing the client’s current portfolio, checking recent market movements, understanding product performance and risk and finally recalling the personal context of their last conversation eg: family, life events etc.

Traditionally, this prep can take days. With an AI-powered platform and the right agents in place, the process looks very different:

  1. The RM logs into their familiar portal.
  2. Behind the scenes, agents pull:
    1. Portfolio data via unified APIs
    2. Market and research data from external sources
    3. Past interaction history and notes from the CRM The agent then compiles:
  3. The agent then complies
    1. A tailored briefing pack for the RM
    2. Suggested talking points and objections to anticipate
    3. Personalised touches like: “How was your holiday in Malta?” based on previous interactions
  4. Sentiment analysis and tone adjustment further refine how recommendations are framed.

    The result is not a bot replacing the RM, but an RM who is 10x better prepared, more relevant, and more human in the time they have.

    Deepak shared that this exact use case is already live on the Backbase platform!

    Making non-deterministic systems trustworthy

    However none of this matters if regulators and risk teams don’t trust the outputs.

    Deepak broke trust down into two areas:

    1. Reducing non-determinism through evaluation and observability

    During development and after go-live, you need continuous feedback on how agents behave. Deepak described an approach that combines:

    • Domain experts reviewing real interactions, scoring responses as good/bad and annotating them
    • “LLM as a judge” where specialist AI models help assess other models’ outputs at scale
    • Dedicated tools (e.g. Langfuse, Langwatch, Arise, and hyperscaler eval tools) to track performance, errors and drift over time

    This is essentially testing for the AI-native world:

    • In cloud-native systems, we relied on automated tests and observability for microservices.
    • In AI-native systems, we rely on evaluation + observability loops for agents and models.

    2. Building a control framework aligned with regulation

    On the governance side, Deepak suggests looking at architecture across four dimensions:

    • Application
    • Model
    • Data
    • Infrastructure

    And then mapping each one against frameworks and regulations such as the EU AI Act, NIST AI RMF, GDPR and DORA, covering topics like data governance, model risk, explainability, operational resilience and human oversight.

    Practically, that leads to controls like: secure SDLC for AI, agents running in sandboxes, an AI gateway to manage access to models and enforce guardrails and ephemeral access to data where possible.

    He also brought up the increasingly popular topic of red teaming, which involves intentionally attacking your own agents using tools like Promptfoo and testing for harmful content, bias, prompt injection and jailbreak attempts before going live.

    The key is that all these tools must be part of the AI landing zone, not something individual teams bolt on later.

    AI across the software development lifecycle

    One of the more interesting parts of the conversation was how Backbase is applying AI to its own software development lifecycle (SDLC).

    Developers already use tools like GitHub Copilot or other AI pair-programmers, but what about everyone who appears before the developer like: Business Analysts, Product Managers, Architects, Testers, UX Designers and more?

    Backbase has adopted and industrialised modern methodologies like Spec kit and the BMAD framework (Breakthrough Method for Agile AI-Driven Development) and Agent Skills to create an agentic SDLC, where each persona has a dedicated agent embedded into their tools and workflows.

    Examples include:

    • A BA agent that helps shape requirements, user stories and non-functional requirements
    • An architect agent that can propose architecture options given constraints
    • A QA agent generating and refining test cases
    • A UX agent that can reason about design choices, even from screenshots of other sites

    Backbase is already using this internally and exposing it to banks via its platform (and Deepak shared that he is using it himself on a few personal projects of his own), but the pattern is reusable for any bank’s internal IT organisation.

    The message here is clear:

    If you want AI to power your growth engine, don’t just apply it to customer journeys, apply it to how you build those journeys in the first place.

    A smarter path to core modernisation: hollowing out, not ripping out

    The conversation closed on a very practical pain point and one that frequently arises at our own events: core modernisation.

    Replacing the core is often described as “open-heart surgery”: high risk, high cost, and disruptive.

    Deepak’s suggestion is a more iterative, domain-led approach he calls “progressively hollowing out the core”.

    The idea:

    Unify the domain first

    • For example, standardise the “deposits” domain via unified APIs.

    Route traffic intelligently

      • Some operations still hit the legacy core.
      • Others are gradually routed to new cores or specialist fintechs (e.g. payments, fraud, identity).

      Use an integration and eventing layer

      • Bi-directional connectivity means changes in any system (core, CRM, fraud engine) can trigger events and sync across the landscape.

      Backbase does this in practice with a curated ecosystem of best-of-breed vendors, for instance working with Feedzai for fraud, which Deepak noted is also selected by the European Central Bank for the digital euro, but the pattern itself is vendor-agnostic.

      The benefit for AI is obvious:

      • Cleaner, domain-led APIs
      • Less brittle integration work every time you want to introduce a new AI-powered journey
      • The ability to evolve the core without constantly breaking the experiences on top

      So what should banking leaders do next?

      If you strip the conversation down to its essentials, Deepak’s advice to leaders came down to a few concrete points:

      1) Invest in an AI landing zone, not isolated pilots

      • Bring together model serving, agent frameworks, governance, observability and evaluation as shared capabilities.

      2) Establish AI platform governance

      • Identify and clearly communicate the person, department, or function that is responsible for the AI landing zone, and give them the mandate to establish best practices, enforce standards and enable the rest of the organisation.
      3) Finish your API and domain work
      • Standardise APIs around clear banking domains and industry models like BIAN so agents can operate safely and consistently.
      4) Embed evaluation, observability and red teaming from day one
      • Treat AI development like a continuous feedback loop, not a one-off project.
      5) Apply AI to your own SDLC
      • Use agentic approaches to speed up analysis, architecture, testing and UX – not just the customer interface.

      6) Modernise the core iteratively

      • Hollow it out domain by domain, using an integration and eventing layer to keep channels stable while you evolve the back end.

      Do this, and AI stops being a collection of demos and becomes what it should be: a genuine growth engine that helps your bank serve, sell and innovate at the speed customers now expect.

      The Banking Scene: Director's Cut

      This really was a session that was jam-packed with insights and good advice. It was a lot to take in (and summarise) so I highly recommend watching / listening to the interview below or following along on your favourite podcast channel here (don't forget to subscribe!).

      Share this via
      © Copyright 2026 The Banking Scene - All rights Reserved.