Insights & Opinions

Implementing an Ethical AI – where do you start?

Tue, 29 Mar 2022

Andrew Vorster, Head of Growth, The Banking Scene


WE NEED HELP!

Humans cannot possibly process the amount of data we have at our disposal, and we need help identifying patterns in data in order to gain insights and make decisions.

Investments in data science, machine learning and artificial intelligence have been steadily increasing over the last decade as we leverage the power of technology to gain efficiencies and improve customer experiences.

But how far should we go?

How much of the decision-making process should we delegate to the machines, and how can we make sure their decisions are fair and ethical?

Jack Clark, co-director of the AI Index Steering Committee at Stanford University, which produced the 2022 AI Index Report, says:

“The bigger and more capable an AI system is, the more likely it is to produce outputs that are out of line with our human values,” and he goes on to say, “We’ve got systems that work really well, but the ethical problems they create are burgeoning.”

This viewpoint is shared by Joris Krijger, PhD, who was our guest at last week’s roundtable.

Joris was studying economic psychology and philosophy at university when he became interested in the Financial Sector, particularly the events surrounding the 2008 financial crisis. What interested him most was that no single person could be held legally accountable, and, in fact, he got the feeling that nobody really felt ethically or morally responsible either.

His prizewinning master’s thesis, the title of which roughly translates to “Acting Shamelessly - What the financial crisis of 2008 tells us about the effect of technology on moral responsibility”, caught the attention of de Volksbank, who invited him to join them as their AI & Ethics Officer to help them develop fair and ethical frameworks for their AI algorithms.

Joris quickly discovered that criticising what went wrong is one thing and coming up with a way to solve the problems is something completely different!

At this point, you might be thinking “but what is the problem? Don’t all banks and most businesses already have a code of conduct that also covers ethics?”

Joris found that while existing codes of conduct covered aspects of personal integrity, whistle-blower policies and the like, they lacked practical application when it came down to coding AI algorithms.

If we take the simplest interpretation of ethics to be “doing the right thing, even when nobody is looking”, the problem comes down to a single question: who decides what the “right thing” is?

Right by whose judgement?

What is perceived to be morally right or wrong reflects societal values and norms, and these change over time and differ by culture and geography.

The “Trolley Problem” has been used by philosophers for many years as a thought experiment, and it has been the focus of many discussions within the self-driving car industry. In essence, if you are faced with an unavoidable collision that will result in the death of one of two groups of people, how do you decide which group of people dies?

You can see how this is relevant to autonomous vehicles in the future. If you decide to buy a self-driving car and it is faced with a real-world scenario in which either a group of schoolchildren is run over and killed, OR you and your passengers (perhaps your parents) are killed, how should the algorithm decide who lives and who dies?

Interestingly, public responses to this problem differ markedly across cultures: in some cultures, old age is valued more highly than youth, while in others the inverse is true.

Perhaps an even more difficult question to answer is: “who should be held accountable and responsible for the consequences of the decision made?”

Should it be you as the owner of the car? How about the manufacturer of the car? What about the programmers of the algorithm that made the decision?

In the Financial Services world, we are now facing our own versions of the Trolley Problem.

For example, when applying AI to your loan portfolio, do you want to optimise for maximum performance for the shareholders, or do you want every decision to be fair and explainable?

Who gets a loan and who doesn’t?

And who decides what “fair” is?

Is it the Board? The sales department, who have targets to meet? The product owner? The customer service department, who have to answer to angry customers whose loan applications have been declined?

You certainly can’t expect the designers of the algorithms to make these determinations.

Analysing historical data will uncover many conscious and unconscious biases that have built up in the organisation over the years. There is a danger of encoding these biases within AI algorithms, giving them a seemingly objective credibility for humans to hide behind.
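As a minimal sketch of what such a bias audit might look like in practice (the dataset, file name and column names below are all invented for illustration), a data science team could start by simply comparing historical approval rates across groups:

```python
import pandas as pd

# Hypothetical historical lending data; "loans.csv" and the
# "approved" / "postcode_region" columns are invented for this sketch.
loans = pd.read_csv("loans.csv")

# Approval rate per group: large gaps are a first warning sign that
# past human decisions (and any model trained on them) may carry bias.
rates = loans.groupby("postcode_region")["approved"].mean()
print(rates.sort_values())

# A crude demographic-parity check: the gap between the most and
# least favoured groups. Zero means equal approval rates; deciding
# what gap still counts as "fair" is exactly the kind of judgement
# that belongs with an ethics committee, not with the programmer.
parity_gap = rates.max() - rates.min()
print(f"Approval-rate gap across groups: {parity_gap:.1%}")
```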

How often have you heard people say “sorry, the computer says you simply are not eligible, and there is nothing I can do about it”? I have had first-hand experience of this. I didn’t think my bank was fair in making their determination at the time and the person I was dealing with couldn’t (or wouldn’t) explain how the decision had been arrived at.

This incident took place many years ago, long before any thought of AI, and these days there is an expectation that any AI needs to be fully explainable, fair, ethical and transparent, which is more than could be said for the human dealing with me at the time!
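One practical route towards that kind of explainability is to favour inherently interpretable models where the use case allows it. The sketch below is purely illustrative, with invented features and data, and real credit models are far more involved; the principle is simply that a transparent model’s coefficients double as human-readable reasons:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy, invented data: income, debt ratio and years with current
# employer for six past applicants, plus the decision that was made.
X = np.array([
    [52_000, 0.20, 8],
    [31_000, 0.55, 1],
    [78_000, 0.10, 12],
    [24_000, 0.60, 2],
    [45_000, 0.35, 5],
    [29_000, 0.50, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = declined

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Each coefficient shows which way a feature pushes the decision, so
# "the computer says no" can become "your debt ratio weighed against
# you": a reason a human can relay and a customer can contest.
coefs = model.named_steps["logisticregression"].coef_[0]
for name, coef in zip(["income", "debt_ratio", "years_employed"], coefs):
    print(f"{name}: {coef:+.3f}")
```

Whether such reasons are sufficient, and which features are even legitimate to use in the first place, is again a question for the organisation rather than for the model.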

Joris discovered that many people in Financial Services thought it should be the responsibility of the compliance department to decide what is and isn’t fair and ethical. But in his opinion, it’s “all of the above”. This led de Volksbank to establish an AI Ethics Committee where these questions can be debated and a course of action decided.

AI ethics committees are fast becoming a hot topic as more governments around the world publish guidelines and pass legislation relating to the implementation of AI, for example, the European Commission’s proposal for AI regulation. These will have far-reaching impacts on any organisation that is considering implementing an AI in any part of its business.

So if you are considering implementing AI in your organisation, where should you start?

Joris believes that, as clichéd as it sounds, you should start by raising awareness: specifically, among existing data science teams and the people working with data daily, to get them to realise that the data they are providing has the potential to impact thousands of people’s lives. Once they reach this realisation and begin to understand the consequences, they become uncomfortable making fairness decisions on their own and begin to look for guidance from an organisational perspective.

My personal opinion is that the very first step you should take is to hire a philosopher onto your team to help you consider the uncomfortable questions … what do you think?
