IBM, Microsoft and Amazon announced this week that they will suspend sales of their facial recognition technology to law enforcement agencies. It's yet another sign of the dramatic social justice impact of the protests.
But the moves by the tech giants also illustrate AI's inherent risks, particularly when it comes to bias and the potential for privacy invasion. There are already signs that Congress will take steps to regulate the technology.
Thanks to advances in deep learning and faster systems for processing huge amounts of data, facial recognition has undoubtedly made major strides in the last decade. Yet much remains to be done.
“AI face recognition technology is pretty good, but not very robust,” Ken Bodnar, an AI researcher, said.
“This means the neural network is well trained and capable of impressive feats of recognition, but it can misidentify you when a single parameter is off. The way it works is that AI runs on probability: it tests a set of proprietary algorithms and parameters when looking at a face. Deep Belief Networks are among the most accurate AI systems, weighing features such as double chins, eye size, hair type, bushy eyebrows, lip shape, age parameters, and so on. But the categorization ‘not very robust’ means it’s easy to fool, because of the intrinsic nature of the way neural networks work.”
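Bodnar's point can be sketched in a few lines of code. The following is a hypothetical illustration, not any vendor's actual system: facial recognition reduced to comparing feature vectors against a similarity threshold, where one perturbed feature is enough to break a match. The feature values and the threshold are made up for the example.

```python
def cosine_similarity(a, b):
    """Similarity score between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def is_match(probe, reference, threshold=0.99):
    # The system never "knows" an identity; it produces a probability-like
    # score and applies a cutoff. The verdict hinges on the threshold.
    return cosine_similarity(probe, reference) >= threshold

# Hypothetical embeddings standing in for traits like eye size or hair type.
reference = [0.80, 0.31, 0.65, 0.22]  # enrolled face
same_face = [0.79, 0.32, 0.64, 0.23]  # same person, slightly different photo
perturbed = [0.80, 0.31, 0.65, 0.92]  # one parameter off (lighting, glasses)

print(is_match(same_face, reference))  # small overall variation: still a match
print(is_match(perturbed, reference))  # a single feature off: rejected
```

The same mechanism explains the fragility: a well-trained model tolerates small variation everywhere, yet a single out-of-range parameter can push the score below the cutoff and flip the decision.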
True, the accuracy issues may not be a big deal for certain facial recognition applications, such as a fun social media app. But when it comes to deciding whether someone should be arrested, it is a different matter entirely.
Facial recognition has also proved less effective when analyzing images and videos of minorities. “As for the issues with this technology, an MIT report last year found that all the facial recognition systems tested had significant problems recognizing people of color,” said Michal Strahilevitz, a marketing professor at Saint Mary’s College of California. “Another report, from the U.S. National Institute of Standards and Technology, showed facial recognition software made many more mistakes trying to identify Black and Asian faces than Caucasian faces. That means Black and brown people are more likely to be identified inaccurately, and therefore unfairly targeted.”
Nevertheless, the debate on facial recognition can certainly become complicated and may even lead to unintended consequences.
“Demonstrations reflect a lack of common understanding of the technology – the public conflates face recognition with body recognition and monitoring, facial mapping, facial identification, recognition of gender/age/ethnicity, biometric validation, etc., as well as misunderstanding the difference between a use case and the technology itself,” said Forrester analyst Kjell Carlsson. “It is very unclear exactly what is being renounced and for which use cases. The result is almost certainly the worst of both worlds: policy that is ineffective at preventing misuse while putting the brakes on valuable use cases.”
For instance, wouldn’t we want people to be able to use facial recognition to help identify victims of abduction in Nigeria? Or to help identify unusual genetic conditions through hereditary phenotyping of children with facial analysis? Or to help catch the Boston Marathon bomber? Or to reduce the false positives in the methods police have used for decades to match faces to mugshots? In any event, facial recognition technology is likely to keep seeing more innovation and development.