Using AI in Finance? Consider These Four Ethical Challenges


Paweł Stężycki, Former Senior Fintech Innovation Consultant at Netguru

Updated Sep 27, 2023 • 9 min read

A few weeks ago, news about regulations limiting Artificial Intelligence (AI) leaked from EU lawmakers and made headlines.

Now, their proposal is an official hundred-page document covering AI systems, processes, and development.

Why are the EU and the European Commission taking aim at AI? Should we be worried? Will superbots become a threat? Was Terminator just a documentary from the future?

By now, we know artificial intelligence and related technologies, such as machine learning algorithms, can be a world-changing force, but fortunately, we are still far from a self-aware super AI system. The significance of this technological change lies in the power and money it confers: AI systems have the potential to deliver additional global economic activity of around $13 trillion by 2030.

In global banking, AI technology is predicted to deliver up to $1 trillion of additional value each year for finance professionals and organizations, according to McKinsey & Company. More than half of this value will come from gains in sales and marketing activities, such as better customer service management and channel management, followed by advances in risk management functions like fraud and debt analytics, ultimately informing better decisions in securities trading, asset management, portfolio management, and overall business decision making.

The benefits of AI to business are undeniable. From improved fraud detection and sorting of unstructured data, to enhanced anti-money laundering (AML) processes, faster underwriting, and more efficient customer servicing, intelligent automation, underpinned by AI, is transforming the financial services industry.

On the flip side, however, its use also comes with a serious risk of unwanted consequences – some not yet even envisaged – which could have an influence on society and the way we live. That’s why whenever we undertake AI-related projects at Netguru, we look at them from various perspectives, consider their challenges, and examine the impact they may have and whether any aspect could be considered unethical.

In this article, we look at four key ethical questions raised by advances in artificial intelligence and invite you to consider the ethical issues and challenges in your own technology projects involving AI systems and machine learning.

1. Where is the line between making a suggestion and influencing a choice?

In our daily online activities, we grant access to a tremendous amount of data. AI can easily track it, process it, and effectively influence particular decisions. Ad-driven marketing has been doing it for products and politics for decades, but now, AI systems have become alarmingly good at it, raising questions around ethics, risks and responsibility.


Since we cannot control who is using AI algorithms, it seems we are at the mercy of any organization using them for any purpose. The Cambridge Analytica scandal – a data breach used to target voters in the US presidential elections – was one of the first wake-up calls to the dangers of the unscrupulous use of AI and big data.

Fortunately, AI is only powerful when supplied with vast amounts of relevant data, but this puts the biggest social media and ecommerce companies under the spotlight. The recent EU proposals are clearly aimed at reining in these companies, with fines reaching up to 6% of their worldwide annual turnover.

Regulations will impact banking and fintech, forbidding the use of AI to evaluate social credit scores – assessing a person’s trustworthiness by their social behavior or predicted personality traits – or to screen the CVs of candidates applying for a job.

EU lawmakers follow the logic that predictive AI algorithms trained on human judgments should not be freely used to make critical ‘black and white’ decisions about individuals. That would be too much, and possibly unfair, influence on how society works.

2. How do we stop AI applications being used for malicious purposes?

AI has been a boon for innovative banks and financial services companies that leverage it to protect their clients’ money and to enforce AML laws and regulations. Every day, bots review millions of finance operations to monitor for suspicious patterns and report them to cybersecurity and AML officers.
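To make this concrete, here is a minimal sketch of how such pattern monitoring might look, assuming scikit-learn’s IsolationForest and a hypothetical three-feature view of each transaction; real AML systems are far more sophisticated, and the feature names and thresholds below are purely illustrative.

```python
# A minimal sketch of AI-based transaction monitoring, assuming
# scikit-learn and a simple tabular feature set. Feature names and
# the contamination rate are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated transactions: [amount, hour_of_day, txns_in_last_24h]
normal = rng.normal(loc=[50, 14, 3], scale=[30, 4, 2], size=(10_000, 3))
suspicious = rng.normal(loc=[9_000, 3, 40], scale=[2_000, 1, 5], size=(10, 3))
transactions = np.vstack([normal, suspicious])

# Train an unsupervised anomaly detector on the transaction stream.
model = IsolationForest(contamination=0.001, random_state=0)
model.fit(transactions)

# predict() returns -1 for anomalies, 1 for inliers; flagged items
# would be routed to an AML officer for human review, not auto-blocked.
flags = model.predict(transactions)
print(f"Flagged {np.sum(flags == -1)} of {len(transactions)} transactions")
```

The key design choice in systems like this is that the model only surfaces candidates: the final judgment on whether a pattern is genuinely suspicious remains with a human officer.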


But AI is a double-edged sword. The same technology that protects us can be used by cybercriminals to exploit us, and using AI techniques to create individually targeted attacks at scale could prove very effective. Imagine an AI-driven conman chatting with millions of people at once, becoming more effective with every conversation.

And it’s not just deliberate malicious actions that we need to be concerned about – unintended side effects can be just as alarming. Consider that in a bid to maximize viewing time, YouTube’s algorithm learned to discredit other forms of media by promoting anti-media content. It may have achieved its goal of keeping viewers on the platform, but at what social cost?

As we develop new AI systems, how do we ensure they are only used for good? Boston Dynamics, for instance, strictly emphasizes in its terms of sale that its machines cannot be used as weapons. Maybe it’s time to consider a similar approach for algorithms, with usage tracking and ethical considerations built into their very design.

3. How do we prevent technology from causing harm?

AI models have increasingly been used to help develop autonomous vehicles. In 2018, an Uber self-driving test vehicle hit and killed a pedestrian as she was crossing the road. Findings by the US National Transportation Safety Board revealed that the car failed to identify the pedestrian as a collision risk until just before the impact. And more recently, two men were killed when their Tesla vehicle, which is believed to have been driverless, hit a tree and burst into flames.

These examples of flawed automation software in AI systems causing harm are shocking, and as autonomous cars become more widely available, we may see more fatal accidents arising from technology. But even in financial services, AI poses a very real risk of harm. In a survey by the World Economic Forum, Transforming Paradigms: A Global AI in Financial Services Survey, 58% of respondents expressed concern that the mass adoption of AI would increase the risk of bias and discrimination in the financial system.

For example, financial institutions or FinTech companies using a machine learning algorithm to make credit limit decisions could inadvertently bake in human biases embedded in historical financial data.

As a result, people of color, young people, or single female applicants could be unfairly disadvantaged by the application of machine learning to faulty data. This raises the question: who is ultimately responsible for the harmful errors of AI-driven software? Should we hold the producers, developers, or testers of new technologies responsible? Sometimes the computing systems used in AI models make decisions that cannot be explained – the so-called black box effect.
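As a rough illustration of how such bias might be audited, here is a minimal sketch of a demographic-parity check on historical credit decisions; the column names and data are hypothetical, and real fairness audits use richer metrics and far larger samples.

```python
# A minimal sketch of a fairness audit on credit decisions, assuming
# pandas and a hypothetical decisions table; columns are illustrative,
# not drawn from any real institution's data.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

# Demographic parity: compare approval rates across groups.
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# A large gap is a signal to investigate the model and its
# training data for inherited historical bias.
gap = rates.max() - rates.min()
print(f"Approval-rate gap: {gap:.2f}")
```

A check like this does not prove discrimination on its own, but a persistent gap is exactly the kind of red flag that should trigger a deeper review of the model and the historical data it learned from.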

Again, new laws are underway in an already highly regulated industry. Financial institutions should expect these may also apply to robo-advisory or high-frequency algorithmic trading. Such solutions are likely to be considered high-risk in terms of market stability from a regulatory point of view and, as such, will undergo risk assessment and mitigation processes.

4. How do we safeguard jobs and balance the distribution of wealth?

With the prospect of advanced AI automation and the subsequent efficiency gains comes the threat of job losses for millions of office and back-office workers. Banking may be particularly hard hit. According to a report by Wells Fargo, 200,000 banking jobs will be lost to robots over the next decade in the United States alone due to the introduction of AI-driven financial technology.

Instead of thousands of employees being taxed and spending their income locally, we will have algorithms developed in the IT labs of the biggest companies on the planet. Maybe we should tax automated labor the same way we tax human work? What will happen to the workers we no longer need? Will they become ‘human copilots’ to AI applications?

The question of replacing human decision makers in particular has prompted many to call for a universal basic income to protect displaced workers and for ethics frameworks to guide AI development.

Ethical aspects aside, there are also questions regarding democracy and the economy. Who will create consumer demand if there are no consumers? What tensions in society will we generate?

Securing the future (before it’s too late)

With AI solutions now widely available, ethical questions about their use are becoming increasingly relevant. Although AI is an unprecedented opportunity for transformation, progress is happening at an incredible pace, and unfortunately, humanity tends to see the adverse consequences of technological breakthroughs too late.

So, while governments and regulators will eventually have the final say in the ethical use of AI systems and algorithms, for now, we all have a role to play in upholding the ethics and values that will allow AI to deliver on the promise of a better future.
