
The AI Stratego - Defining The Rules for Banking

Mon, 10 Jun 2024

Andrew Vorster, Head of Growth, The Banking Scene

“It’s inevitable that at some point, our industry will experience an ‘oh sh*t’ moment when it comes to AI.”


For me, the comment above summed up WHY we need to continue to have conversations and panels like the one I had the honour of moderating recently at The Banking Scene Conference Brussels. Valérie Hoess (Head of European Affairs, Commerzbank), Joris Krijger (AI and Ethics Officer, De Volksbank) and Anthony Morris (Chief Industry Transformation Officer, nCino) shared their insights on the key considerations for banking and financial services professionals integrating artificial intelligence into their organisations.

Most financial services organisations have already deployed some form of AI. From predictive analytics and advanced data modelling to robotic process automation and customer service chatbots, the use cases for AI have been steadily growing over the last decade or so.

Indeed, 96% of the audience that responded to a Slido poll during the session stated that their organisation currently makes use of AI in some form.

To date, the focus has primarily been on improving efficiency and making things easier for humans, but the recent advent of Generative AI (in the form of ChatGPT, for example) has seen an explosion of possible use cases and an unprecedented demand for experimentation driven by both bank employees and their customers.

This is what led Anthony to make the opening statement above.

His concern is that at some point in the not-too-distant future, there will be a “tectonic event”, where AI will make a decision that will have a significant, negative impact on the balance sheet, which will drive regulators into overdrive and see the shutters come down on the technology. This set the tone for the rest of the session as we explored how we could embrace the opportunities without risking the very foundations of our industry.

Regulation, Risk Avoidance and Risk Mitigation

Valérie was quick to jump in and strongly disagree with the underlying assumption that this is “all happening in the wild west”. She pointed out that, particularly in the financial services industry, there are well-established rules governing how we deal with data, assess credit risk and affordability, and how we develop, purchase and implement technology.

These rules apply no matter what technology we are dealing with.

She also noted that the recent EU regulation on AI focuses primarily on LLMs (Large Language Models) and Generative AI, which, while relatively new, are only a subset of the topic of Artificial Intelligence and by no means encompass the entirety of what we call AI. Regulation needs to constantly adapt, and while we might feel there is a never-ending stream of regulatory updates for us to deal with, it is necessary for the protection of both ourselves and our customers.

Perhaps the greater challenge faced by financial institutions is the added complexity of overlapping regulations. For example: if your AI models are deployed in the cloud, then what aspects of DORA and GDPR do you need to take into consideration in addition to the newly released AI regulation?

This raised the question of whether regulation is perceived to be holding back innovation, and AI adoption in particular, which prompted the next Slido poll for the audience: “What is preventing (further) AI adoption in your organisation?”

To my surprise, only 13% of the respondents thought regulation was preventing further AI adoption, with 15% saying cost and 20% saying talent was holding them back.

53% of the respondents voted “unknown risk” as the biggest barrier to further AI adoption.

For many years, Risk and Compliance departments in banks have simply said “no” to any unknown risks, giving rise to a Risk Avoidance mindset, as the best way to avoid any risk is simply not to do anything! The panellists all agreed that moving forward, financial institutions need to adopt more of a Risk Mitigation mindset to leverage the opportunities presented by AI.

This requires close co-operation with the risk and compliance department from the outset and not as an afterthought.

However, this advice came with a caveat.

Compliance and Ethics

In a fantastic soundbite, Joris cautioned against AI use cases that were “lawful but awful” – in other words, be careful about things you can do that are not explicitly prohibited by regulation, but at the same time are ethically questionable.

We have had many deep discussions about ethics with Joris in the past, and on this occasion he stressed the need for an ethical infrastructure that goes beyond simply providing a framework to developers stating that models need to be fair, explainable and accountable. He went on to explain the importance of having an ethical infrastructure to identify the ethical questions related to the use of AI, to manage them effectively, and to have some form of deliberation to resolve them, as there is never a single “right” answer. An ethical infrastructure should encompass policy, governance, tools, training and assessment; embedding the concept of ethics into the DNA of an organisation is more of an organisational challenge than a technical one.

Valérie reinforced the importance of cultivating a company culture that enables and encourages people to ask questions and navigate grey areas. She expressed her appreciation of the AI Ethics guidelines published by the European Commission, not because they explicitly state what can and can’t be done (which they don’t), but because they provide you with questions to ask yourself throughout the process of developing and deploying artificial intelligence. Valérie also cautioned against striving for “audit-proof” ethical guidelines, stating that if your own guidelines are too strict, people will tend towards doing exactly what is required, nothing less, but potentially nothing more, which would constrain them from thinking beyond simply what is allowed.

Moving forwards

As AI adoption increases across the industry, most companies will be looking to leverage external platforms to build out their capability, largely due to the costs associated with building their own technology infrastructure and the scarcity of GPUs required for Generative AI implementations. This brings vendor due diligence into sharp focus, and the need to sit down with providers to reach a common understanding of what the regulations mean, both in terms of interpretation and implementation, was strongly advocated across the panel.

However, in closing, I was curious to hear from the audience via one final Slido poll whether, in the longer term, they thought Artificial Intelligence would turn out to be:

  1. Just another overhyped fad
  2. Another regulatory headache
  3. Business as usual
  4. The biggest game-changer the industry has ever seen

With 62% of the respondents voting that Artificial Intelligence would turn out to be the biggest game-changer the industry has ever seen, Anthony urged us all to consider whether the future focus should be not on risk mitigation vs risk avoidance, but rather on the opportunity cost of not embracing the capabilities of AI to its fullest extent across our industry.



(Download our white paper "Generative AI in Benelux Banking: Opportunities, Challenges and Future Outlook" for a consolidated collection of insights shared by Benelux bankers across 1:1 interviews, keynotes, panels and executive round tables we hosted)
