Facial Recognition Doesn’t Like You

Technology is not built with all of us in mind

Photo by h heyerlein on Unsplash

One of the things I love most is reading articles about what's coming next, technologically speaking. I enjoy digging into topics such as responsible tech, AI, and social justice.

For me, there's no other way to stay informed about how these topics are progressing. Much of this news seems to come straight from science fiction novels, and it's scary to realize how close we are to dystopian realities. In my opinion, where you live shapes how much exposure you get to AI, how it works, and how aware you are of its links to social justice.

One of the latest articles I read was written by Eva Botkin-Kowacki, about how to remove bias from facial recognition. We currently use this technology in everyday contexts like beauty and entertainment, and in far more troublesome tasks like police profiling.

Recently, Netflix released the documentary Coded Bias by Shalini Kantayya, in which Joy Buolamwini, Cathy O'Neil, Meredith Broussard, and many other technologists explain the dangers of inaccurate and misused facial recognition.

The film shows how our movements are filmed day after day. As citizens, we're told that these captured images are kept for our safety, to prevent criminal activity. And we believe it with our eyes closed. We don't question who owns these recordings or what is done with them afterwards.

We fail to see the harm that these kinds of deployments cause to certain groups in our society. It's nothing new that technology is not designed for everyone, and facial recognition is no exception.

We know it struggles to detect people of color, and I wonder how this started. I'm pretty sure the cameras we used in the past (during the '80s and '90s) weren't calibrated for our skin tones, and almost nobody did anything to change that. That's why we're still dragging this worrying legacy along.

One of the first things that comes to mind when talking about these biases is that the dataset is not representative of our society. And when we talk about solutions, we tend to hear that a bigger dataset (which means more investment and therefore more money) would better represent our population. But this is not the only approach to consider.

Although larger amounts of data might help reduce false positives, we should take a closer look at the labels being used. In the article, Eva gives the example of associating blond hair only with white people. Nowadays, anyone can have blond hair; linking this trait to one group of people in 2021 is very simplistic.
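To make this concrete, here is a minimal sketch of the kind of label audit I have in mind: it simply cross-tabulates an annotated trait (like hair color) against a demographic group, to reveal whether annotators have tied a trait to a single group. The column names and data are invented for illustration, not taken from any real dataset.

```python
import pandas as pd

# Hypothetical annotation table: column names and values are invented for illustration.
labels = pd.DataFrame({
    "hair_color": ["blond", "blond", "brown", "black", "blond", "black"],
    "skin_group": ["white", "white", "white", "black", "black", "asian"],
})

# Cross-tabulate the annotated trait against the demographic group.
# If "blond" only ever co-occurs with one group, the labels themselves
# (not just the dataset size) are encoding a stereotype.
audit = pd.crosstab(labels["hair_color"], labels["skin_group"], normalize="index")
print(audit)
```

A check this small won't fix a biased dataset, but it makes the assumption behind each label visible before the model ever sees it.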

The first step toward a better use of facial recognition is acknowledging that:

  • We are more mixed and complex than we think.
  • Identifying someone by generalizing about our features is difficult.

We must also recognize that these pipelines are biased because the people who develop and deploy them are usually biased too. The profile of the people who work on them is always very similar: white cis men, usually with a poor understanding of how our society works. Companies tend to rely on their skills and expertise and don't think about the consequences that a bad deployment of their product might have on their revenue and reputation.

Some companies are trying to fight this by providing DEI sessions for their employees. Others use AI ethics tools like Fairlearn, which assess how fair an algorithm is according to certain fairness metrics.
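For instance, here is a minimal sketch of the kind of check Fairlearn enables: comparing a model's accuracy and selection rate across a sensitive attribute. The predictions and group labels below are toy values made up for illustration.

```python
import numpy as np
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference
from sklearn.metrics import accuracy_score

# Toy ground truth, model predictions, and a sensitive attribute (all invented).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Break accuracy and selection rate down by group.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(frame.by_group)

# A single number summarizing how unevenly the model selects across groups.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```

A report like this doesn't make an algorithm fair on its own, but it forces the team to look at performance per group instead of one aggregate number.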

Sometimes the solution is simpler than it seems. We can diversify our workforce, and we will all benefit from it. We will see more diverse people in the industry, and that will inspire others to join it. We need more people of color and more people of different incomes, genders, and religions (to name just a few intersecting identities).

Another way to dismantle bias could be creating an open environment in our teams to discuss these kinds of topics. The working group should be able to speak out if they have any concern about how the pipeline is performing or how it is being designed.

This last idea requires action from HR. They must be taught how to deal with their own hiring biases and to foster new ways of recruiting. For example:

What kind of people do our job descriptions attract?

What kind of referrals are we receiving? Are they diverse? If not, why not?

Another thing companies can do is explain to a wider audience how their algorithms work. We should clearly advocate for transparency to make sure our rights are being respected. That can only happen if we understand what kind of data is being collected about us and the parameters and hypotheses used before an algorithm is deployed in our society.

I hope that with the release of Shalini Kantayya's film, many more questions will surface about both the correct and incorrect performance of facial recognition. It's very important that we, as individuals, fight these social inequalities before it's too late, because one way or another, we can all be harmed by unconscious bias.
