Using AI in Finance? Consider These Four Ethical Challenges

Pawel Stezycki

Jun 15, 2021 • 9 min read

A few weeks ago, news about regulations limiting AI leaked from EU lawmakers and made headlines. Now, their proposal is an official hundred-page document. Why is the EU taking aim at AI? Should we be worried? Will superbots become a threat? Was Terminator just a documentary from the future?

By now, we know artificial intelligence has the ability to be a world-changing force, but fortunately, it’s still far from becoming a self-aware super AI. The real stakes are the power and money it confers – AI has the potential to deliver additional global economic activity of around $13 trillion by 2030.

In global banking, it is predicted to deliver up to $1 trillion of additional value each year. More than half of this value will come from gains in sales and marketing activities, such as better customer service management and channel management, followed by advances in risk functions like fraud and debt analytics.

The benefits of AI are undeniable. From improved fraud detection and enhanced anti-money laundering (AML) processes to faster underwriting and more efficient customer servicing, intelligent automation is transforming the financial services industry.

However, its use also comes with a serious risk of unwanted consequences – some not yet even envisaged – which could have an influence on society and the way we live. That’s why whenever we undertake AI-related projects at Netguru, we look at them from various perspectives, consider their challenges, and examine the impact these projects may have.

In this article, we look at four key ethical questions raised by advances in artificial intelligence and invite you to consider the implications in your own projects.

1. Where is the line between making a suggestion and influencing a choice?

In our daily online activities, we grant access to a tremendous amount of data. AI can easily track it, process it, and effectively influence our decisions. Ad-driven marketing has been doing this for products and politics for decades, but now AI has become alarmingly good at it.


Since we cannot control who uses AI algorithms, it seems we are at the mercy of any organization using them for any purpose. The Cambridge Analytica scandal – in which harvested Facebook user data was used to target voters in the 2016 US presidential election – was one of the first wake-up calls to the dangers of unscrupulous data use.

Fortunately, AI is only powerful when supplied with vast amounts of relevant data, but this puts the biggest social media and e-commerce companies under the spotlight. The recent EU proposals are clearly aimed at reining in these companies, with fines reaching up to 6% of their worldwide annual turnover. The regulations will also impact banking and fintech, forbidding the use of AI for social credit scoring – assessing a person’s trustworthiness by their social behavior or predicted personality traits – or for screening the CVs of job candidates.

EU lawmakers follow the logic that predictive AI algorithms trained on other people’s decisions should not be used freely to make critical decisions about individuals. That would exert too much, and potentially unfair, influence on how society works.

2. How do we stop AI being used for malicious purposes?

AI has been a boon for innovative banks that leverage it to protect their clients’ money and to enforce AML laws and regulations. Every day, bots review millions of operations to monitor for suspicious patterns and report them to cybersecurity and AML officers.
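
To make this concrete, here is a minimal, illustrative sketch of the kind of rule-based screening such monitoring might start from. All names and thresholds here (Transaction, flag_suspicious, the 10,000 reporting threshold, the 24-hour burst window) are hypothetical, and real AML systems layer many more signals, watchlists, and machine learning models on top of simple rules like these:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Transaction:
    account_id: str
    amount: float
    timestamp: datetime

def flag_suspicious(transactions, amount_threshold=10_000,
                    window=timedelta(hours=24), burst_size=5):
    """Return (transaction, reason) pairs for transfers that breach a
    single-amount threshold or form a dense burst from one account."""
    flags = []
    recent_by_account = {}
    for tx in sorted(transactions, key=lambda t: t.timestamp):
        # Rule 1: a single transfer at or above the reporting threshold.
        if tx.amount >= amount_threshold:
            flags.append((tx, "amount at or above threshold"))
        # Rule 2: too many transfers from one account inside the window.
        history = recent_by_account.setdefault(tx.account_id, [])
        history.append(tx.timestamp)
        history[:] = [t for t in history if tx.timestamp - t <= window]
        if len(history) >= burst_size:
            flags.append((tx, f"{len(history)} transfers in {window}"))
    return flags

# Invented example: one large transfer and one rapid burst of small ones.
now = datetime(2021, 6, 1, 12, 0)
txs = [Transaction("acc-1", 12_500, now)] + [
    Transaction("acc-2", 900, now + timedelta(minutes=i)) for i in range(6)
]
for tx, reason in flag_suspicious(txs):
    print(tx.account_id, reason)
```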

But AI is a double-edged sword. The same technology that protects us can be used by cybercriminals to exploit us, and AI-powered attacks that are individually targeted yet launched at scale could prove very effective. For example, imagine an AI-driven conman chatting with millions of people at once and becoming more persuasive with every conversation.

And it’s not just deliberate malicious actions we need to be concerned about – unintended side effects can be just as alarming. Consider that, in a bid to maximize viewing time, YouTube’s recommendation algorithm learned to promote anti-media content that discredited other forms of media. It may have achieved its goal of keeping viewers on the platform, but at what social cost?

As we develop new solutions, how do we ensure they are only used for good? Boston Dynamics, for instance, strictly stipulates in its terms of sale that its machines cannot be used as weapons. Perhaps it’s time to consider a similar approach for algorithms, along with ways to track how they are used.

3. How do we prevent machines from causing harm?

In 2018, an Uber self-driving test vehicle hit and killed a pedestrian as she was crossing the road. Findings by the US National Transportation Safety Board revealed that the car failed to identify the pedestrian as a collision risk until just before the impact. And more recently, two men were killed when their Tesla vehicle, which is believed to have been driverless, hit a tree and burst into flames.

These examples of flawed automation software causing harm are shocking, and as autonomous cars become more widely available, we may see more fatal accidents. But even in financial services, AI poses a very real risk of harm. In a survey by the World Economic Forum, 58% of respondents expressed concern that the mass adoption of AI would increase the risk of bias and discrimination in the financial system.

For example, using a machine learning algorithm to make credit decisions could inadvertently bake in human biases present in the historical data. As a result, people of color, young people, or single female applicants could be unfairly disadvantaged. This raises the question: who is ultimately responsible for the harmful errors of AI-driven software? Should we hold the producers, developers, or testers accountable?
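
One way teams try to catch such bias early is by auditing decision logs for disparate impact. Below is a minimal, hypothetical sketch: it computes approval rates per group and the ratio of the lowest to the highest rate, using the "four-fifths rule" as a rough warning heuristic (a common screening convention, not a legal threshold). The data and group labels are invented for illustration:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs.
    Returns the approval rate observed for each group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group approval rate.
    Values below ~0.8 (the "four-fifths rule") are a common
    warning sign that a model may disadvantage a group."""
    return min(rates.values()) / max(rates.values())

# Invented example data: (group, was the application approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
rates = approval_rates(decisions)
print(rates)                    # {'group_a': 0.666..., 'group_b': 0.333...}
print(disparate_impact(rates))  # 0.5 -> well below 0.8, worth investigating
```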

Again, new laws are underway. Financial institutions should expect them to also cover robo-advisory and high-frequency algorithmic trading. From a regulatory point of view, these solutions are likely to be considered high-risk in terms of market stability and, as such, will undergo risk assessment and mitigation processes.

4. How do we safeguard jobs and balance the distribution of wealth?

With the prospect of advanced automation and the efficiency gains it brings comes the threat of job losses for millions of office and back-office workers. Banking may be particularly hard hit: according to a report by Wells Fargo, 200,000 banking jobs will be lost to robots over the next decade in the United States alone.

So instead of thousands of employees paying taxes and spending their income locally, we will have algorithms developed in the IT labs of the biggest companies on the planet. Perhaps we should tax automated labor the same way we tax human work? And what will happen to the workers we no longer need? This question in particular has prompted many calls for a universal basic income to protect displaced workers.

Ethical aspects aside, there are also questions for democracy and the economy. Who will create consumer demand if there is no middle class? What tensions will this generate in society?

Securing the future (before it’s too late)

With AI solutions now widely available, ethical questions about their use are becoming increasingly relevant. Although AI is an unprecedented opportunity for transformation, progress is happening at an incredible pace, and unfortunately, humanity tends to see the adverse consequences of technological breakthroughs too late.


So while governments and regulators will eventually have the final say in how we use AI, for now, we all have a role to play in upholding the ethics and values that will allow AI to deliver on the promise of a better future.
