Ethics and Inclusion in AI - Interview with Yonah Welker

Dominika Błaszak

Aug 3, 2021 • 13 min read

Technology and society have traditionally been seen as separate entities. But with the rise of AI, the two are becoming ever more closely intertwined.

In addition to making our lives easier, artificial intelligence has the power to help us address some of the most urgent challenges facing our world, from enabling disaster awareness and prediction to alleviating global hunger and offering clean energy solutions. The future is bright.

But how do ethics and inclusion fit into our understanding of AI? If we fail to proceed with caution, or simply miscalculate, AI could have unintended consequences for the entire world. For the sake of future generations, it is our duty to deploy AI in a safe and responsible way while representing the interests of everyone in society – not just the privileged few.

We caught up with Yonah Welker, a technologist and explorer focused on problems of accessibility, algorithmic diversity, and policy, to hear about their human-centered AI and ethics project, developed in response to the lack of existing frameworks at the intersection of AI and human rights.

Dominika Błaszak: Could you tell us a bit about your latest project and how it started?

Yonah Welker: A year ago, I was asked to judge startups on a range of ethics criteria at the ISDI Impact Accelerator in Spain, and what we found was that, in contrast to policymakers, companies actually had no idea about these criteria. They still don't have access to open-source knowledge and guidelines on how to create, for instance, robots aligned with human rights, neurodiversity, or gender.

So we created this project entitled Human-centered AI and Ethics (EU), which is focused on technology, accessibility, and ethics, including women's, children's, and disability rights. The project is also known as 'smart factories' and it aims to provide human-centered technology across factories in Europe. The first pilot will involve huge organizations like Volkswagen, Bosch, and Siemens. So it's an opportunity not just to advance our academic understanding of ethics, but to actually integrate these frameworks into the manufacturing process.

What is the project's aim?

On the one hand, I want this project to provide companies with a general understanding of AI, ethics, and responsible development. But I also want to start a discussion on the next stage of AI, such as how we can make technology accessible for disabled people and solve the underrepresentation of women in the workplace.

It's about making AI more aligned, because in most cases it's impossible to create products for women or disabled people if we have no such people on our boards or in our research teams.

Should there be international regulations on AI?

We are currently seeing a rise in many community-focused policies and guidelines. For example, in Europe, the Women in AI group tends to focus more on climate change and health care, but in the United States, they focus more on social justice, equality, and human rights. So, when we try to introduce technology in different countries, we face different challenges depending on the communities we reach.

On the one hand, we should create these policies, but at the same time we should make them in a very democratic, decentralized way, so that all of the agents of the market (not only policymakers, but also researchers, innovators, and entrepreneurs) are able to participate in the discussion and share their feedback, because that's what human-centered design is: constant human feedback, with every stakeholder able to share their opinion.

Why is it so hard for women to work in this sector, to the point that they feel they need a community just for women?

Nowadays we typically try to talk about inclusion, but that's not the key issue. The biggest issue is that our vision of the world is still based on a post-colonial regime. Most technologies, institutions and universities were created by a particular type of white male mindset, to simplify it. Most of the ecosystems, like Silicon Valley, were created in this way, and we've exported this model everywhere.

Up until now, all of our industries have been driven by people who just want to make money. As a result, most of the startups we have today are just unicorns which have no value whatsoever for society. They only have value for a particular type of person.

If you go to female-driven hackathons, they cover topics like health care, justice, and quality education. But if you go to the male ecosystems, they tend to explore much more money-oriented topics like finance and insurance.

So how can we make AI more inclusive?

We have whole global companies driven by one single mindset and language. We've created some kind of truth where Silicon Valley or venture capital is seen as good. But maybe science is good, maybe empathy is good, maybe quality is good.

Until we are able to deconstruct this monopoly of vision, we won't be able to solve many problems. Until we are able to create competition within ecosystems where we have diverse stakeholders, we won't be able to share how these groups can drive innovation, not just revenue. People are now trying to reshape the entire ecosystem from scratch, because they believe it's impossible to try to fit into a system that excluded them from the beginning.

What does human centricity really mean?

Human centricity is our ability to involve human feedback or stakeholder feedback at every phase of development. We typically consider human centricity as more of a development or research technique, but the problem is that this doesn't work until we make the whole organization human-centric.

I recently started working on a human-centric protocol for a startup. One of the foundations of this vision is coming up with a cultural and accessible moral vocabulary and embedding the ethical framework at every stage of the organization. For instance, if your developers know about your ethical considerations but your salespeople have no idea, the message on your landing page will never be right. It's about the whole cycle.

What are the implications of prioritizing technical knowledge over cultural knowledge?

People with a technical background currently lead the market in terms of the salaries and respect we give them, but we're not able to create a product if we're not actually able to understand customers, from both a cognitive and a cultural perspective. That's why I attach significant importance to people who are able to provide high-quality ethnic and gender research, like social scientists.

The bigger problem here is that we teach our kids how to understand the market instead of how to understand the world. We create a kind of market of dilettantes who have a pitch deck and statistics but no real idea what they're talking about.

A brilliant example of how to be a successful dilettante is Elon Musk. We have some rich white male who always has an opinion on everything, who knows how to fix the world or how to make our oceans clean in just a few days. But without the right knowledge or education, you create companies which just deliver zero value for people.

How can we move away from this market of dilettantes, as you put it, and create companies which do deliver value for people?

In the UK, there's an extremely successful new type of startup accelerator. It's called Entrepreneur First, and in just a few years they've attracted USD 130 million because they've started bringing in people with PhDs to actually solve problems. They've realized that our key assets are our brains, our empathy, and our talent, not just some kind of abstract ambition.

So in human-centered design, we need to actually have people who know about gender studies, social studies, and cultural studies, who can create this deep type of research for developers and work with them to produce ethical products that actually solve the problems of minorities and disabled people.

At the moment, companies claim to be ethical, but it's hypocrisy and I really don't like it. That's why in my work I really try to rely not on the people who have just started to care, because I believe that they still don't care. They just don't want to lose their markets or their reputation.

How can we support, not replace, humans in the shift to AI?

So when we talk about human-centered technology, AI, and ethics, we talk about ethical frameworks, transparency, and accountability. And as long as we continue to hold people accountable, our world will remain human-centric, because robots are still just a tool. They're never a replacement for humans. The main reason humans as a species have evolved and become the center of our world is our ability to use tools, build communities, and be creative.

How can we minimize the dangers from the supremacy of algorithms, black boxes, and biases?

First of all, fixing algorithmic bias does not mean fixing algorithms, but actually fixing the culture of our organization, which, as I mentioned before, involves creating balanced companies.

Another problem is the black box. Open source has become a huge thing, and I'm a huge advocate of it, but at the same time we're seeing a decline in competence and knowledge. In most cases, people work with data sets or libraries but have no idea how they work. Bridging the gap between access, skills, knowledge, and education is a huge part of the problem behind the black box.

There's also the problem of technical fixes. Say there's fake news on a platform: we tend to just create another feature in our backlog, deploy it, and potentially create even more bias than we had before. This is how many platforms and technology companies work, and it's a typical situation because these fixes are created by developers. We have no idea how to solve things in a different way because we believe developers are the kings of this world and that everything can simply be fixed with an update. But the whole vision behind this is completely wrong.

How can developers avoid bias when developing AI solutions?

First, there are some brilliant frameworks, such as those produced by Deloitte or the Turing Institute. Depending on your niche vertical, find a template and use it as a starting point.

Second, start to reshape your culture and your team. Come up with a vocabulary related to your niche and make sure you understand digital citizenship, because these days everyone is a blogger. Then, it's about introducing the vocabulary, actually using it, understanding it, and practising it at every stage of development.

After that, we can introduce all the elements of the ethical framework, including the principle of discriminatory non-harm and the principle of accountability. It's about trying to implement accountability by design and ultimately auditing the black box.
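
To make the idea of auditing the black box a little more concrete, here is a minimal sketch of one common type of bias check: comparing a model's positive-prediction rates across demographic groups and flagging a large gap. The group labels, the toy data, and the four-fifths threshold are assumptions made purely for illustration, not part of any specific framework Yonah refers to.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction rate for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group rate divided by the highest; 1.0 means parity."""
    return min(rates.values()) / max(rates.values())

# Toy model outputs and the (hypothetical) group each case belongs to.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = selection_rates(preds, groups)
print(rates)                               # {'a': 0.8, 'b': 0.2}
if disparate_impact_ratio(rates) < 0.8:    # "four-fifths" rule of thumb
    print("Possible disparate impact: review the data and the model.")
```

A check like this is only a starting point; as Yonah stresses, the deeper fix is organizational, not purely technical.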

What are some of the other current issues with AI?

Another problem in AI is the inability to deliver outcomes to different stakeholders.

You need to create a transparent system where you can interpret algorithms, results, and outcomes in different interfaces, depending on the type of internal and external stakeholders.

We use numbers, data, and visual representations, which helps us avoid black boxes and makes the output accessible and understandable for every stakeholder.
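
As a loose illustration of interpreting the same outcome through different interfaces, the sketch below renders one hypothetical model explanation in two views: a numeric table for technical stakeholders and a plain-language summary for non-technical ones. The feature names, weights, and wording are invented for the example.

```python
def explain(feature_names, weights):
    """Pair each feature with its weight, strongest influence first."""
    return sorted(zip(feature_names, weights),
                  key=lambda pair: abs(pair[1]), reverse=True)

def technical_view(pairs):
    """Numeric table for data scientists and auditors."""
    return "\n".join(f"{name:<22}{weight:+.3f}" for name, weight in pairs)

def plain_language_view(pairs):
    """Short narrative for non-technical stakeholders."""
    return "\n".join(
        f"'{name}' {'increases' if weight > 0 else 'decreases'} the predicted score."
        for name, weight in pairs
    )

# Hypothetical weights from a fitted linear model.
features = ["years_of_experience", "commute_distance", "referral"]
weights  = [0.42, -0.10, 0.75]

pairs = explain(features, weights)
print(technical_view(pairs))
print(plain_language_view(pairs))
```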

Then there's the issue of assessing and measuring impact. Existing tools are driven by academic institutions or universities, but they're not really relevant to companies or organizations. So in most cases, companies need to come up with their own impact criteria in order to measure how their technology is able to deliver a positive impact at every stage of development, from research up to deployment.
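
One simple way a company might operationalize this is to encode its own impact criteria per stage as a checklist and track how much of it has been met; the stages and criteria below are invented purely for illustration.

```python
# Hypothetical impact criteria, one checklist per development stage.
impact_criteria = {
    "research": ["diverse participants recruited", "consent documented"],
    "development": ["bias audit completed", "accessibility review passed"],
    "deployment": ["feedback channel live", "impact metrics published"],
}

def stage_completion(completed):
    """Share of criteria met at each stage, given the set of completed items."""
    return {
        stage: sum(item in completed for item in items) / len(items)
        for stage, items in impact_criteria.items()
    }

done = {"diverse participants recruited", "bias audit completed"}
print(stage_completion(done))
# {'research': 0.5, 'development': 0.5, 'deployment': 0.0}
```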

If you had to give one tip to entrepreneurs to help them make AI projects human rights-friendly, what would it be?

Try to define the community that expresses your inner child, and become hugely passionate about fixing that community's problems.

What can we expect to see from you in the future?

In addition to my work with the EU on the future of learning and neurodiversity, I'm currently working on a range of initiatives to reshape technology, accessibility and ethics, specifically in the healthcare, education, and occupational sectors.
