"Better a fallen rocket than never a burst of light."
~ Tom Stoppard, The Invention of Love

Friday, December 3, 2021

Coded Bias (2020)

Depending on the subject and slant, documentaries can be a little like a supersized episode of Last Week Tonight, in that many are an in-depth treatment of “That Thing You Like Is Bad.” While I was no great proponent of facial-recognition software and already knew a decent amount about how troubling its usage could be (in fact, I learned a lot from a Last Week Tonight piece on the subject), this film dives into the myriad ways it can be very, very bad.

The film kicks off with Joy Buolamwini, who set out to design a fun program for an MIT assignment and quickly hit a snag when she discovered that the facial-recognition software she needed to use couldn’t read her face. What began as a neat coding idea instead grew into an extensive study of racial bias in facial recognition, as well as a campaign to remove this inaccuracy-prone software from the many places it encroaches on our lives.

Almost anytime a fun new app or much-touted piece of law-enforcement technology that relies on facial recognition launches, it doesn’t take long before the exact same controversies start arising. The technology can’t see darker-skinned faces, or else it can’t distinguish between them with reliable accuracy. Obviously, having trouble with a Snapchat filter or being unable to unlock your phone with your face isn’t as serious as being arrested because an algorithm falsely identified you, but the issues stem from the same faulty technology. Clearly, these companies aren’t interested in proactively solving this problem, because it keeps cropping up again and again.

Facial recognition used in more “official” capacities, like law enforcement, is of course of greatest concern. People have a tendency to trust technology as neutral, incorruptible, bias-proof. But as the documentary repeatedly hammers home, technology is designed by humans, and humans have biases. The algorithms that recognize faces are fed data by mostly-white programmers who don’t notice that they’re programming their biases into their tech, and the mostly-white workers quality-testing those programs don’t pick up on it either. And so, the technology is heralded as the future, getting incorporated into more and more aspects of society in ways that can have a real impact on people’s lives.

There are some really interesting things here. The AI that was introduced to Twitter and, within hours, was parroting Nazi propaganda. The pilot program to identify residents of apartment buildings, which the landlord then used to retaliate against tenants he didn’t like. The test sites in the U.K. that build a repository of citizens’ faces, where police stop and question people who, understandably not wanting their faces in a police database, obscure them from the cameras.

One thing that really struck me was the pushback on the idea that technological advances start among the wealthy and then gradually trickle down to all social classes. And for nice toys and gadgets, that might be true. But when it comes to technology as surveillance, like facial recognition, it often comes to poorer neighborhoods first: a way to police an already-overpoliced population at lower expense but with a high probability of error.

Subjects like this can quickly leave you with a creeping sense of paranoia, and the film definitely does that. But fortunately, it also shows positives, like Joy and others testifying before Congress about how these faulty programs shouldn’t be used to surveil private citizens without their knowledge. I appreciated seeing people from a wide range of experiences come together to fight this issue, working collaboratively to come up with solutions and advocate for them.

Warnings

Language and thematic elements.
