Facing up to Challenges in Facial Recognition
How can we use this powerful technology for good while ensuring it is not abused by opportunistic companies?
By Xische Editorial, July 21, 2019
From companies using OkCupid photos to train facial recognition software to immigration services in the United States using driver's licence photos to locate undocumented migrants, we have entered a moment of serious debate about facial recognition. How can we use this powerful technology for good and ensure that it is not abused by opportunistic companies?
While the bulk of the discussion might focus on the dark side of facial recognition software, the fundamental concern is data privacy and safety. That is something we all have a stake in. Let’s start with the basics: facial recognition software on its own is not nefarious. Around the world, facial recognition is used to carry out security tasks and aid companies as well as governments.
When the terms of use are clear or the organisation using the programme operates in the public interest, it is hard to dismiss facial recognition as a dangerous technology. It is the most advanced form of biometric technology, and biometrics are already part of our daily lives. From shopping malls to doctors’ offices, we do not have a problem using our biometrics to pay, access documents, and register for services. We do all this without much thought about our privacy because we trust the systems that control, store, and use the information.
Facial recognition technology is improving. We use facial recognition to unlock our iPhones. A Chinese AI startup is using facial recognition of dogs to track lost animals through their nose prints. The possibilities are endless.
The challenge sits with regulation, and that is one of the defining issues of our time. When it comes to facial recognition software and its associated uses, the specific challenge is twofold. First, companies operating facial recognition software cannot be allowed to use private images to train their platforms. This is the case unfolding with OkCupid, a dating website whose images have been used for years by facial recognition startups. The only body with the power to protect consumers in this area is the government, and it is time to see international action.
The other challenge is more subtle and arguably more difficult to control: leading facial recognition platforms have been criticised for their high error rates. Take the case of Amazon’s Rekognition software. The Verge reports that leading AI researchers from around the world have called on Amazon to stop selling the software for exactly this reason. “In an open letter,” The Verge reported in April, “AI researchers say studies have repeatedly shown that Amazon’s algorithms are flawed, with higher error rates for darker-skinned and female faces. The researchers say that if such technology is adopted by the police, it has the potential to amplify racial discrimination, create cases of mistaken identity, and encourage intrusive surveillance of marginalized groups.”
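The disparity those researchers describe is ultimately something that can be measured: a system’s false match rate can be broken down by demographic group and the groups compared. The sketch below, written in Python with entirely fabricated audit data and hypothetical group labels, illustrates the kind of per-group accounting such an audit involves; it is not Amazon’s methodology or any real benchmark.

```python
# Hypothetical illustration: auditing a face-matching system's false match
# rate per demographic group. All records below are fabricated for the example.
from collections import defaultdict

# Each record: (group, predicted_match, actual_match)
audit_results = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", True, True),
]

# group -> [false matches, non-matching trials]
counts = defaultdict(lambda: [0, 0])
for group, predicted, actual in audit_results:
    if not actual:                  # only genuinely non-matching pairs can yield a false match
        counts[group][1] += 1
        if predicted:               # system wrongly declared a match
            counts[group][0] += 1

for group, (errors, trials) in counts.items():
    rate = errors / trials if trials else 0.0
    print(f"{group}: false match rate = {rate:.0%} ({errors}/{trials})")
```

If one group’s false match rate is consistently higher than another’s across large, representative test sets, that is the kind of measurable flaw the open letter points to, and the kind of evidence regulators could reasonably demand before deployment.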
The power of private corporations to wield unchecked influence over facial recognition software is clear here. Governments, at both the city and national level, should take up their mantle of operating in the public interest by pushing back on this issue. We have seen this dynamic at play in the US, where cities such as Oakland, California, are refusing to use Amazon’s facial recognition software until its error rates improve. This is exactly the type of push required to ensure that problems are fixed and the public is not put at risk.
As with many technology challenges these days, the solution is not all that dramatic. Governments have the power to regulate and protect consumers. They need to use that power in a cautious manner so as not to stifle innovation. In some cases, this requires that lawmakers study up on the technology, but the amount of legwork needed is not outlandish. Once the right balance is struck, however, the full potential of facial recognition to improve society will be realised.