Ethics and Inclusion in AI - Interview with Yonah Welker

Dominika Błaszak

Aug 3, 2021 • 16 min read

Artificial Intelligence is becoming an increasingly important part of our lives, underpinning many of the products and services we use every day.

But how do we ensure ethics and inclusion in AI?

In addition to adding convenience to our personal and working lives, AI systems have the power to help us address urgent global challenges, from disaster awareness and prediction to alleviating global hunger and powering clean energy solutions.

So far, so good. But AI ethics and inclusion are vital considerations when we look at applying AI development to these issues. If we don’t proceed with caution when developing AI, there can be unexpected consequences, such as ethical issues surrounding bias, diversity and inclusion.

For the sake of present and future generations, it is our duty to deploy AI in a safe and responsible way which represents everyone in society - not just the privileged few.

To find out how we can do that, we spoke to Yonah Welker. Yonah is a technologist and explorer currently looking at problems of accessibility, algorithmic diversity and policy. In this interview, Yonah talks about their human-centred AI ethics program, the importance of an international, ethical framework for AI, and how developers can avoid bias when developing AI solutions.

But first, let’s take a look at the challenges standing in the way of ethics and inclusion in AI.

The challenges of ethics and inclusion in AI

Not only does AI create challenges around the future of labor and potential mass unemployment, but uneven access to AI-powered services and technologies risks exacerbating existing global inequalities. Groups with uneven access include poor communities, women, LGBTQ individuals, ethnic and racial groups, and people with disabilities.

Algorithmic bias is a key ethical concern, since algorithms underpin machine learning and AI systems. The term describes systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.
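To make that definition concrete, here is a minimal sketch of one common bias check, often called the disparate impact or "four-fifths" test: it compares how often a model gives each group a favourable outcome. The data, group labels, and threshold below are purely illustrative assumptions, not taken from the interview.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-outcome rate for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(predictions, groups, privileged):
    """Ratio of the lowest group's selection rate to the privileged group's rate."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / rates[privileged]

# Illustrative decisions: 1 = favourable outcome (e.g. application approved)
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = disparate_impact_ratio(preds, groups, privileged="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # values below ~0.8 are a common red flag
```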

As Yonah will go on to explain, AI is developed by human beings. And as we all live in a world in which certain groups are underrepresented, excluded or discriminated against, it stands to reason that AI is at risk of reflecting that reality. How can inclusive AI systems be built when the teams developing them are not inclusive?

If these challenges are not addressed, there is a risk of developing uneven AI systems which only represent a world of inequality, rather than helping to build a fairer, more even society. We can tackle this through regulation, frameworks, education and diversity of vision. Let’s see what Yonah has to add.

Dominika Błaszak: Could you tell us a bit about your latest project and how it started?

Yonah Welker: A year ago, I was asked to judge startups on a range of ethics criteria at the ISDI Impact Accelerator in Spain, and what we found was that, in contrast to policymakers, companies actually had no idea about these criteria. They still don't have access to open source knowledge and guidelines on how to create, for instance, robots aligned to human rights, neurodiversity or gender.

So we created this project entitled Human-centered AI and Ethics (EU), which is focused on technology, accessibility, and ethics, including women's, children's, and disability rights. The project is also known as 'smart factories' and it aims to provide human-centred technology across factories in Europe. The first pilot will involve huge organizations like Volkswagen, Bosch, and Siemens. So it's an opportunity not just to advance our academic understanding of ethics, but actually integrate these frameworks into the manufacturing process.

What is the project's aim?

On the one hand, I want this project to provide companies with a general understanding of AI, ethics, and responsible development. But I also want to start a discussion on the next stage of AI, such as how we can make technology accessible for disabled people and solve the underrepresentation of women in the workplace.

It's about making AI more aligned, because in most cases it's impossible to create products for women or disabled people if we have no such people on our boards or in our research teams.

Should there be international regulations on AI?

We are currently seeing a rise in many community-focused policies and guidelines. For example, in Europe, the Women in AI group tends to focus more on climate change and health care, but in the United States, they focus more on social justice, equality, and human rights. When we try to introduce technology in different countries, we face different challenges depending on the communities we reach.

On the one hand, we should create these policies, but at the same time make them in a very democratic, decentralized way, so that all of the agents of the market, not only policymakers but also researchers, innovators, and entrepreneurs, are able to participate in the discussion and share their feedback, because that's what human-centered design is. It's about constant human feedback, so that all stakeholders and all agents of the market are able to share their opinion.

Why is it so hard for women to be in this specific sector, to the point that they feel they need a special community only for women?

Nowadays we typically try to talk about inclusion, but that's not the key issue. The biggest issue is that our vision of the world is still based on a post-colonial regime. Most technologies, institutions and universities were created by a particular type of white male mindset, to simplify it. Most of the ecosystems, like Silicon Valley, were created in this way, and we've exported this model everywhere.

Up until now, all of our industries have been driven by people who just want to make money.

As a result, most of the startups we have today are just unicorns which have no value whatsoever for society. They only have value for a particular type of person.

If you go to female-driven hackathons, they cover topics like health care, justice, and quality education. But if you go to the male ecosystems, they tend to explore much more money-oriented topics like finance and insurance.

So, how can we make AI more inclusive?

We have whole global companies driven by one single mindset and language. We've created some kind of truth where Silicon Valley or venture capital is seen as good. But maybe science is good, maybe empathy is good, maybe quality is good.

Until we are able to deconstruct this monopoly of vision, we won't be able to solve many problems. Until we are able to create competition within ecosystems where we have diverse stakeholders, we won't be able to share how these groups can drive innovation, not just revenue. People are now trying to reshape the entire ecosystem from scratch, because they believe it's impossible to try to fit into a system that excluded them from the beginning.

What does human centricity really mean?

Human centricity is our ability to involve human feedback or stakeholder feedback at every phase of development.

We typically consider human centricity as more of a development or research technique, but the problem is that this doesn't work until we make the whole organization human-centric.

I recently started working on a human-centric protocol for a startup. One of the foundations of this vision is developing a culturally grounded, accessible moral vocabulary and embedding the ethical framework at every stage of the organization. For instance, if your developers know about your ethical considerations, but your salespeople have no idea, the message on your landing page will never be correct. It's about the whole cycle.

What are the implications of prioritizing technical knowledge over cultural knowledge?

While people with a technical background currently lead the market in terms of the salaries and respect we give them, we're not able to create a product if we're not actually able to understand customers, both from a cognitive and cultural perspective. That's why I attach significant importance to people who are able to provide high-quality ethnic and gender research, like social scientists.

The bigger problem here is that we teach our kids how to understand the market instead of how to understand the world.

We create a kind of market of dilettantes who have a pitch deck and statistics, but we have no idea what they're talking about.

A brilliant example of how to be a successful dilettante is Elon Musk. We have some rich white male who always has an opinion on everything, who knows how to fix the world or how to make our oceans clean in just a few days. But without the right knowledge or education, you create companies which just deliver zero value for people.

How can we move away from this market of dilettantes, as you put it, and create companies which do deliver value for people?

In the UK, there's an extremely successful new type of startup accelerator. It's called Entrepreneur First and in just a few years they've attracted USD 130 million because they've started using people with a PhD to actually solve problems. They've realized that our key assets are our brains, our empathy and our talent, not just some kind of abstract ambition.

In human-centered design, we need to actually have people who know about gender studies, social studies, and cultural studies, who can create this deep type of research for developers and work with them to produce ethical products that actually solve the problems of minorities and disabled people.

At the moment, companies claim to be ethical, but it's hypocrisy and I really don't like it. That's why in my work I really try to rely not on the people who have just started to care, because I believe that they still don't care. They just don't want to lose their markets or their reputation.

How can we support, not replace, humans in the shift to AI?

So when we talk about human-centered technology, AI, and ethics, we talk about ethical frameworks, transparency, and accountability. And as long as we continue to make people accountable, our world will remain human-centric, because robots are still just a tool. They're never a replacement for humans. The main reason why people as a species have evolved and become the center of our world is their ability to deal with tools, build communities, and be creative.

How can we minimize the dangers from the supremacy of algorithms, black boxes, and biases?

First of all, fixing algorithmic bias does not mean fixing algorithms, but actually fixing the culture of our organization, which as I mentioned before involves creating balanced companies.

Another problem is the black box. Currently open source has become a huge thing, and I'm a huge advocate of it, but at the same time we have kind of a decrease in competence and knowledge. In most cases, people work with the data sets or libraries but have no idea how they work. Bridging the gap between access, skills, knowledge, and education is a huge problem behind the black box.

There's also the problem of technical fixes. Say if there's fake news on a platform: we tend to just create another feature in our backlog, deploy it, and potentially create even more bias than we had before. This is how many platforms and technology companies work and it's a typical situation because it's created by developers. We have no idea how to solve something in a different way because we believe that developers are just the kings of this world and that everything should be simply fixed by an update. But the whole vision behind this is completely wrong.

How can developers avoid bias when developing AI solutions?

First, there are some brilliant frameworks, such as those produced by Deloitte or the Turing Institute. Depending on your niche vertical, find a template and use it as a starting point.

Second, start to reshape your culture and your team. Come up with a vocabulary related to your niche and make sure you understand digital citizenship, because these days everyone is a blogger. Then, it's about introducing the vocabulary, actually using it, understanding it, and practising it at every stage of development.

After that, we can introduce all the elements of the ethical framework, including the principle of discriminatory non-harm and the principle of accountability. It's about trying to implement accountability by design and ultimately auditing the black box.
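To illustrate what accountability by design could look like in practice, the sketch below audits a model's false-negative rate per group before release and blocks deployment when the gap exceeds a threshold. The metric, threshold, and group names are assumptions made for this example rather than requirements from the frameworks Yonah mentions.

```python
def false_negative_rate(y_true, y_pred):
    """Share of genuinely positive cases the model missed."""
    misses = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    positives = sum(y_true)
    return misses / positives if positives else 0.0

def audit_gate(y_true, y_pred, groups, max_gap=0.10):
    """Per-group false-negative rates plus a pass/fail flag for deployment."""
    per_group = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        per_group[g] = false_negative_rate([y_true[i] for i in idx],
                                           [y_pred[i] for i in idx])
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap <= max_gap

# Illustrative labels and predictions for two groups
rates, ok = audit_gate(
    y_true=[1, 1, 0, 1, 1, 0, 1, 1],
    y_pred=[1, 0, 0, 1, 1, 0, 0, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates, "deploy" if ok else "block release and investigate")
```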

What are some of the other current issues with AI?

Another problem in AI is the inability to deliver outcomes to different stakeholders.

You need to create a transparent system where you can interpret algorithms, results, and outcomes in different interfaces, depending on the type of internal and external stakeholders.

We use numbers, data, and visual representations, which helps us avoid black boxes and makes the output accessible and understandable for every stakeholder.
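As a rough illustration, and without assuming anything about Yonah's own tooling, the same audit result could be rendered as two views: exact metrics for the data team and a plain-language summary for a non-technical review board.

```python
def stakeholder_views(per_group_error_rates):
    """Same result, two presentations: precise metrics vs. a plain-language summary."""
    technical = {group: round(rate, 3) for group, rate in per_group_error_rates.items()}
    worst = max(per_group_error_rates, key=per_group_error_rates.get)
    best = min(per_group_error_rates, key=per_group_error_rates.get)
    plain = (f"The system makes mistakes for group {worst} about "
             f"{per_group_error_rates[worst]:.0%} of the time, compared with "
             f"{per_group_error_rates[best]:.0%} for group {best}.")
    return {"data_team": technical, "review_board": plain}

# Illustrative per-group error rates
print(stakeholder_views({"A": 0.21, "B": 0.34}))
```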

Then there's the issue of assessing and measuring impact. Existing tools are driven by academic institutions or universities, but they're not really relevant to companies or organizations. So in most cases, companies need to come up with their own impact criteria in order to measure how their technology is able to deliver a positive impact at every stage of development, from research up to deployment.

If you had to give one tip to entrepreneurs to help them make AI projects human rights-friendly, what would it be?

Try to define the community which expresses your inner child and become hugely passionate about fixing the problem of this community.

What can we expect to see from you in the future?

In addition to my work with the EU on the future of learning and neurodiversity, I'm currently working on a range of initiatives to reshape technology, accessibility and ethics, specifically in the healthcare, education, and occupational sectors.

Introducing ethics and inclusion to AI

So, AI ethics is a challenging field, but also a promising one. From Yonah’s work on representation, neurodiversity and human-centred technology, we can see that there is work being done to ensure that teams developing AI are diverse and representative.

However, there is much still to be done, particularly around corporate AI ethics and the diversity of AI development teams. As Yonah explains, fixing algorithmic bias does not mean fixing algorithms - it means changing the culture of your teams and applying the right frameworks to data science, machine learning models and AI systems alike.

This involves not just a different way of thinking and vocalizing AI issues, but also improving training so that AI developers know how the data sets and models they use actually work, the knock-on effects they can have and how to avoid them.

AI, and the use of AI for business, is growing at a rapid rate, but the technology ecosystem and those who work within it are still getting to grips with inclusive development in AI.

Initiatives like Yonah’s, and many others, are helping to change that, and highlighting that the AI investments we make today - whether in research, regulation or product development - will ultimately be what defines the world we live in tomorrow.

If we want to help eradicate the inequality we see in the world right now - and we should - we need to think carefully about how AI systems are designed and deployed - as well as who helps to design and deploy them.
