https://www.democracynow.org/2020/1/30/coded_bias_shalini_kantayya_joy_buolamwini
“Coded Bias”: New Film Looks at Fight Against Racial Bias in Facial
Recognition & AI Technology
Story January 30, 2020

Topics

   - Big Tech <https://www.democracynow.org/topics/big_tech>
   - Sundance Film Festival
   <https://www.democracynow.org/topics/sundance_film_festival>

------------------------------
Guests

   - Joy Buolamwini
   <https://www.democracynow.org/appearances/joy_buolamwini>
   researcher at the MIT Media Lab and founder of the Algorithmic Justice
   League. She is featured in the documentary *Coded Bias*, which just
   premiered at the 2020 Sundance Film Festival.
   - Shalini Kantayya
   <https://www.democracynow.org/appearances/shalini_kantayya>
   director of the documentary *Coded Bias*, which premiered at the 2020
   Sundance Film Festival.

------------------------------
Links

   - Joy Buolamwini on Twitter <https://twitter.com/jovialjoy>
   - Shalini Kantayya on Twitter <https://twitter.com/shalinikantayya>

A new documentary looks at the dangers of artificial intelligence and its
increasing omnipresence in daily life, as new research shows that it often
reflects racist biases. Earlier this month, Cambridge, Massachusetts,
became the latest major city to ban facial recognition technology, joining
a growing number of cities, including San Francisco, that have outlawed the
software, citing flawed technology and racial and gender bias. A recent
study also found that facial recognition misidentified African-American and
Asian faces 10 to 100 times more often than white faces. The film
“Coded Bias” begins with Joy Buolamwini, a researcher at the MIT Media Lab,
“Coded Bias” begins with Joy Buolamwini, a researcher at the MIT Media Lab,
discovering that most facial recognition software does not recognize
darker-skinned or female faces. She goes on to uncover that artificial
intelligence is not in fact a neutral scientific tool; instead, it
internalizes and echoes the inequalities of wider society. For more on the
film, we speak with Joy Buolamwini, a researcher who uses art to raise
awareness on the implications of artificial intelligence and is featured in
the documentary “Coded Bias,” which just premiered at the 2020 Sundance
Film Festival. We also speak with Shalini Kantayya, director of “Coded
Bias.”
------------------------------
Transcript
This is a rush transcript. Copy may not be in its final form.

*AMY GOODMAN:* This is *Democracy Now!*, democracynow.org, *The War and
Peace Report*. I’m Amy Goodman, with Nermeen Shaikh. And we’re broadcasting
from the Sundance Film Festival in Park City, Utah, from Park City TV,
where a new film looks at the racial and gender prejudice baked into
artificial intelligence technology, like facial recognition. The film is
called *Coded Bias*.

*NERMEEN SHAIKH:* Earlier this month, Cambridge, Massachusetts, voted to
ban facial recognition, joining a growing number of cities in the U.S.,
including San Francisco, that have outlawed the artificial intelligence
software, citing flawed technology.

*AMY GOODMAN:* A recent study found facial recognition identified
African-American and Asian faces incorrectly 10 to 100 times more than
white faces. The study by the National Institute of Standards and
Technology found a photo database used by law enforcement incorrectly
identified Native Americans at the highest rates.

*NERMEEN SHAIKH:* The danger of flawed artificial intelligence and its
increasing omnipresence in daily life is the focus of the new film *Coded
Bias*. The film begins with Joy Buolamwini, a researcher at the MIT Media
Lab, who discovers that most facial recognition software does not recognize
darker-skinned or female faces when she has to wear a white mask to be
recognized by a robot she herself is programming. She goes on to reveal
that artificial intelligence is not in fact a neutral scientific tool, but
instead reflects the biases and inequalities of wider society.

*AMY GOODMAN:* This is Joy Buolamwini testifying before Congress in May.

*JOY BUOLAMWINI:* I’m an algorithmic bias researcher based at MIT, and I’ve
conducted studies that show some of the largest recorded racial and
skin-type biases in AI systems sold by companies like IBM, Microsoft and
Amazon. You’ve already heard facial recognition and related technologies
have some flaws. In one test I ran, Amazon Rekognition even failed on the
face of Oprah Winfrey, labeling her male. Personally, I’ve had to resort to
literally wearing a white mask to have my face detected by some of this
technology. Coding in white face is the last thing I expected to be doing
at MIT, an American epicenter of innovation.

Now, given the use of this technology for mass surveillance, not having my
face detected could be seen as a benefit. But besides being employed for
dispensing toilet paper, in China the technology is being used to track
Uyghur Muslim minorities. Beyond being abused, there are many ways for this
technology to fail. Among the most pressing are misidentifications that can
lead to false arrest and accusations. … Mistaken identity is more than an
inconvenience and can lead to grave consequences.

*AMY GOODMAN:* That’s Joy Buolamwini, who now joins us here in Park City at
the Sundance Film Festival, along with Shalini Kantayya, the director of
the film *Coded Bias*, which just premiered here at the film festival.

We welcome you both to *Democracy Now!* So, take it from there, Joy. I
mean, how did you end up testifying before Congress? And take us on your
journey, from MIT, discovering that your face is one that would be
recognized so many fewer times when artificial intelligence technology is
used than others. I mean, maybe that’s protection. Who knows?

*JOY BUOLAMWINI:* Absolutely. So, my journey started as a grad student. I
was working on an art project that used face detection technology, and I
found that it didn’t detect my face that well, until I put on a white mask.
And so, it was that white mask experience that led to questioning: Well,
how do computers see in the first place? How is artificial intelligence
being used? And if my face isn’t being detected in this context, is it just
me or other people?

*AMY GOODMAN:* Can you also step back? What even does artificial
intelligence mean? What does AI mean?

*JOY BUOLAMWINI:* Sure. So, AI is about giving machines what we perceive to
be somewhat intelligent from a human perspective. So, this can be around
perceiving the world, so computer vision, giving computers eyes. It can be
voice recognition. It can also be about communication. So, think about
chatbots, right? Or think about talking to Siri or Alexa. And then, another
component to artificial intelligence is about discernment or making
judgments. And this can become really dangerous, if you’re deciding how
risky somebody is or if they should be hired or fired, because these
decisions can impact people’s lives in a material way.

*NERMEEN SHAIKH:* Well, can you talk about the origins of artificial
intelligence? You go over it a bit in the film *Coded Bias*.

*JOY BUOLAMWINI:* Yes. And Shalini does a great job of really taking it all
the way back to Dartmouth, where you had a group of who I affectionately
call “pale males” coming together to decide what intelligence might look
like. And here you’re saying, “If you could play chess well, that’s
something that looks like intelligence.” The thing also about artificial
intelligence is what it is changes. So, as machines get better at specific
kinds of tasks, you might say, “Oh, that’s not truly intelligence.” So,
it’s a moving line.

*AMY GOODMAN:* So, Shalini, why don’t you talk about how you came up with
the idea for *Coded Bias*, with Joy, of course, a central figure of this film,
and take the history further?

*SHALINI KANTAYYA:* Well, basically, I was sort of like a science fiction
fanatic. And so I like reading about technology and imagining the future.
And I think so much of what we think about artificial intelligence comes
from science fiction. It’s sort of the stuff of *Blade Runner* and *The
Terminator*. And then, when I started sort of reading and listening to TED
Talks by Joy and another mathematician named Cathy O’Neil, other women like
Meredith Broussard and Zeynep Tufekci, I realized that artificial
intelligence was something entirely different in the now. It was becoming a
gatekeeper, making automated decisions about who gets hired, who gets
healthcare and who gets into college. And when I discovered Joy’s work, I
was just captivated by this young woman who was disrupting the disruptors.

*AMY GOODMAN:* So, let’s go to a clip from your remarkable film, *Coded
Bias*. This shows police in London stopping a young black teen.

*SILKIE:* Tell me what’s happening.

*GRIFF FERRIS:* This young black kid’s in school uniform, got stopped as a
result of a match. Took him down that street just to one side and like very
thoroughly searched him. It was all plainclothes officers, as well. It was
four plainclothes officers who stopped him. Fingerprinted him after about
like maybe 10, 15 minutes of searching and checking his details and
fingerprinting him. And they came back and said it’s not him.

Excuse me. I work for a human rights campaigning organization. We’re
campaigning against facial recognition technology. We’re campaigning
against facial — we’re called Big Brother Watch. We’re a human rights
campaigning organization. We’re campaigning against this technology here
today. And then you’ve just been stopped because of that. They
misidentified you. And these are our details here.

He was a bit shaken. His friends were there. They couldn’t believe what had
happened to him.

Yeah, yeah. You’ve been misidentified by their systems. And they’ve stopped
you and used that as justification to stop and search you.

But this is an innocent, young 14-year-old child who’s been stopped by the
police as a result of a facial recognition misidentification.

*AMY GOODMAN:* So, that’s a clip from *Coded Bias*. Joy Buolamwini, explain
further what took place here, the misidentification, the identification.
Some might perversely say it’s better for this technology to fail, so that
people can’t be identified, but this is the opposite case.

*JOY BUOLAMWINI:* Absolutely. So you were saying earlier maybe not being
identified is a good thing. But then there are the misidentifications that
have a real-world impact. So, in the clip and in the film, you actually see
the work of Big Brother Watch U.K. And in this particular scenario, Big
Brother Watch U.K. was able to track what was going on in London. And one
of the things they showed in their study
<https://bigbrotherwatch.org.uk/all-campaigns/face-off-campaign/>, “Face
Off,” was you had false positive match rates of over 90%. So you see this
one example here, but they also had reports where more than 2,400 innocent
people were mismatched. So it’s not just a case of, “Oh, you’re not
detected.” That might be sometimes. But you could be misidentified as
somebody you’re not, and the consequences can be grave.

*AMY GOODMAN:* And we’re playing this clip at a time when *The New York
Times* reports that London’s police department said it would begin using
facial recognition to spot criminal suspects with video cameras as they
walk the streets, adopting a level of surveillance that is rare outside
China. The technology London is deploying goes beyond many of the facial
recognition systems used elsewhere, which match a photo against a database.
The new technology uses software that can immediately identify people on a
police watchlist as soon as they’re filmed on a video camera, Joy.

*JOY BUOLAMWINI:* And I think you might need to say “attempt to identify,”
because oftentimes the claims that are made about these technologies don’t
necessarily match up to the reality. Earlier you spoke about the National
Institute of Standards and Technology study. They studied 189
algorithms from 99 different companies. And so, this is the majority of the
facial recognition technology that’s out there — racial bias, gender bias,
age bias, as well. So, if you have a face, you have a place in this
conversation, and we all need to be concerned. So I think it’s highly
irresponsible to deploy technologies that we already know have significant
flaws, that we already know can be abused. It’s common sense to place a
moratorium until we’re at a better place.

*NERMEEN SHAIKH:* Well, Shalini, another place that you profile in the
documentary is China. And you speak to this woman at some length. So, a
couple of questions. First, how did you get access? And your response to
the fact that she actually supported the credit — what is it? The social
credit system?

*SHALINI KANTAYYA:* Absolutely.

*NERMEEN SHAIKH:* If you could explain what that is, how it works there and
what your sense is of the kind of support that this system has in China?
And then, Joy, along the same lines as what you were talking about earlier,
in places like China, where the artificial intelligence and facial
recognition, the technology is developed there, is there a similar bias?
And if so, what is it? But first, Shalini.

*SHALINI KANTAYYA:* Well, I got access through a local production company
in China. And I feel that this woman kind of gave us insight into this
social credit system that is coming up in China, where they’re using
facial recognition in tandem with the social credit system. So, if
you — basically, they’re tracking you, they’re watching you, they’re
surveilling you, and they’re scoring you. And not only what you do impacts
your score, but what your friends do impacts your score. And this young
woman, who I — who is featured in the film, says that, you know, in fact,
we don’t have to trust our own senses anymore, that we can rely on this
sort of social credit score to actually have integrity in who we trust and
who we don’t trust. And I think, in the film, you know, we sort of want to
think, “Oh, that’s sort of a galaxy far, far away from the U.S.” But in the
making of this film, I saw all kinds of parallels of that type of scoring
that’s happening here in the U.S. and in other places around the world.

*NERMEEN SHAIKH:* Explain how you see that it’s comparable or could be.

*SHALINI KANTAYYA:* Well, as Amy Webb says so poignantly in the film, we’re
all being scored all the time, from our Uber scores to our Facebook likes.
All of that information is being tracked and analyzed all of the time. And
so we’re all being rated all of the time. And so, that kind of tracking can
impact how much we pay for insurance, what kind of opportunities are shown
to us online. And so, very much it becomes sort of an algorithmic
determinism.

*AMY GOODMAN:* And Joy?

*JOY BUOLAMWINI:* So, to the question of how are the systems working in
China, in our first study <http://gendershades.org/>, called “Gender
Shades,” we looked at IBM, Microsoft, but we also looked at Face++, a
billion-dollar tech startup in China. And we found similar racial bias and
gender bias. But, overall, when they’ve done studies on AI systems
developed in China, they tend to work better on Chinese faces, right? And
those developed in Western nations tend to work better on Western faces, as
well.

One thing I did want to bring up related to China and data collection is
this data colonialism that we’re starting to see. So you have reports of
Chinese companies going to African nations, providing facial recognition or
surveillance technologies in exchange for something very precious, the
biometric data of the citizens. So, now, parallel to what we had with the
slave trade — right? — where you’re extracting bodies, now you’re
extracting digital bodies in service of a global trade, because even when
you talk about what’s going on in London, they’re using technology from a
company called NEC that’s based in Japan. And so you have to really think
about the global context for how these technologies spread around the world.

*SHALINI KANTAYYA:* And just to add to that, China has unfettered access to
data. It has now been mandated that if you want to access the internet in
China, you must submit to facial recognition. So, that is the basis for
which they’re building this kind of scoring system.

*AMY GOODMAN:* Shalini, as we wrap up, what about regulation?

*SHALINI KANTAYYA:* These algorithms are impacting all of us in the most —
in our civil rights, and we need legislation. We need meaningful
legislation around algorithms.

*AMY GOODMAN:* And the explanation of algorithms, in just 20 seconds, Joy,
for us nonscientists?

*JOY BUOLAMWINI:* Yes. So, algorithms are essentially processes that are
meant to solve a particular task. So, when we talk about
AI, we’re talking about systems that can perceive the world, that can
communicate and, most importantly, make determinations. And these
determinations impact our lives.

*AMY GOODMAN:* Well, we want to thank you so much for being with us. Joy
Buolamwini is a researcher at the MIT Media Lab and founder of the
Algorithmic Justice League. We’re going to link to her speeches
<https://www.ted.com/speakers/joy_buolamwini> and her congressional
testimony
<https://oversight.house.gov/legislation/hearings/facial-recognition-technology-part-1-its-impact-on-our-civil-rights-and>
at democracynow.org. And Shalini Kantayya is director of the new film,
that’s just premiered here at the Sundance Film Festival, called *Coded
Bias*.