Fri, 28 Nov 2025
In a sector built on trust, stability, and compliance, the rapid adoption of artificial intelligence presents both a great opportunity and a formidable challenge for financial services. As generative AI tools race ahead, many banks are left wondering how to balance innovation with governance, ethics, and risk.
To help us navigate these challenges, I sat down with Seppe Housen, Senior Responsible AI Engineer, and Dimitri De Rocker, AI Director at Datashift, both of whom are helping financial institutions embed responsible AI practices across their organisations.
The key takeaway: the stakes are high, but with the right approach, responsible AI can become a competitive advantage, not a compliance headache.
Many banks already have extensive governance and risk frameworks in place. So why isn’t that enough for AI?
“Traditional governance is largely built around deterministic systems,” Seppe explained. “AI systems, on the other hand, are data-driven and probabilistic. They rely on patterns and predictions, not fixed rules. That makes the outcomes more complex, less predictable, and potentially more prone to bias or misuse.”
This difference means that existing governance remains relevant, but not sufficient. Especially as generative AI increases accessibility and experimentation across non-technical teams, it becomes essential to extend and adapt governance models to these new realities.
Financial institutions have always been adept at managing risk and regulatory compliance. But according to Seppe, that’s a double-edged sword.
“Yes, banks have the right mindsets and teams in place. But they’re not starting from a clean slate,” he said. “They already have complex governance structures. That makes it harder to integrate new AI-specific processes without duplicating effort or creating friction.”
The key is integration. Responsible AI shouldn’t be an add-on or a bottleneck. It needs to be woven into existing processes and roles to ensure uptake, scalability, and long-term viability.
When asked which responsible AI principles banks should prioritise (fairness, transparency, or explainability), Seppe didn’t hesitate.
“They all matter. But the most important principle is trust. Banks exist to be trusted stewards of people’s money and data. That means being fair, transparent, secure, and robust, whether AI is involved or not.”
He made a key distinction: some principles, like transparency, should always be present, regardless of the use case. Others, like explainability, may depend on the risk and complexity of the system in question. After all, not every algorithm needs to be interpretable to the nth degree, but every system does need oversight.
Dimitri added that a bank’s risk appetite also plays a role in how these principles are applied. “There’s regulation, yes. But there’s also the institution’s own values and priorities,” he said. “Those should guide where to invest most.”
So how do banks build risk management frameworks that balance compliance, innovation, and operational reality?
“It starts with making things explicit,” said Seppe. “Terms like ‘risk’ and ‘innovation’ are too abstract. Teams need concrete, actionable guidance tailored to AI.”
One critical insight? Risk assessments must be early, broad, and collaborative. Waiting until deployment to conduct a review is a recipe for delay or failure.
The short answer: everyone, but not equally.
Seppe outlined a governance structure built around three groups of stakeholders.
“Each of these groups has a critical part to play,” he said. “You can’t outsource responsible AI to one department. But ultimately, the business must own the risk decisions. They decide what’s worth doing, how much risk to accept, and where to invest in mitigation.”
For higher-risk use cases, especially those regulated under the EU AI Act, escalation to a senior ethics or compliance board may be necessary. But banks will also need pragmatic processes to handle the potentially thousands of lower-risk AI use cases emerging across the business.
So how do you scale responsible AI beyond a slide deck?
According to Dimitri, the biggest hurdle is connecting the right people, many of whom don’t naturally work together.
“AI involves everyone: from data scientists to marketers to compliance officers. Most of them speak different languages. You need a clear framework, defined roles, and someone to coordinate it all.”
Seppe agreed, highlighting two levers for success.
So what’s standing in the way of responsible AI today?
Datashift’s response is to focus on efficiency and early engagement.
“Shift risk left,” Seppe advised. “Don’t wait until a model is ready for deployment to start asking questions. Involve legal, security, and compliance early, so issues can be addressed without derailing timelines.”
Also, group similar AI use cases (like marketing models) under shared governance templates. “You don’t need to reinvent the wheel every time,” he said. “If 50 models use the same logic, assess them once and scale your controls.”
To wrap up, I asked each of them the same question: If banks only focus on one area of responsible AI over the next year, what should it be?
Seppe’s answer: Start with early-stage, risk-informed use case assessments.
“Your people already know which use cases matter most. Begin there. Bring everyone to the table (business, tech, and compliance) and ask: what are we building? What could go wrong? What do we need to get right?”
Dimitri’s take: Alignment is key.
“Everyone needs to be on the same page about risks, responsibilities, and priorities. If you don’t align early, you’ll spend six months chasing issues instead of solving them.”
If you’re keen to take your responsible AI journey further, Seppe and Dimitri have shared a free resource outlining Datashift’s three-step approach for banks looking to embed governance into their AI efforts, which you can download here.
AI in banking isn’t going away anytime soon. But as it becomes more embedded across the organisation, it’s vital that the systems we build reflect the values we stand for.
Responsible AI isn’t just about ticking boxes. It’s about building systems and a culture that protects the trust banks have spent decades earning.
And if there’s one thing to remember from this conversation, it’s this: you don’t have to start from scratch.
You just have to start smart.
Watch / listen to the full discussion below, or follow on your favourite podcast platform here.