Face recognition: Real 3D versus academic 2D
Almost daily we hear about face recognition. From the contested accuracy of Apple's FaceID on its latest iPhone to testing by NIST and even apprehension at the ACLU, there are numerous active discussions. As a technologist and entrepreneur who has worked in the biometric space for more than 20 years, I find these discussions often get simplified to the point of being misleading. I feel compelled to weigh in because the waters have been muddied around the value that accurate face recognition can really provide in 2018.
For perspective, machine vision has been evolving for several decades. Many consider the father of facial recognition to be a mathematician named Woodrow Wilson Bledsoe, who developed a system to classify photos of faces using what's known as a RAND tablet nearly 60 years ago. This device let people input horizontal and vertical coordinates on a grid using a stylus that emitted electromagnetic pulses, as a way to capture and register a facial profile.
Fast forward to 2018 and we are seeing many face recognition use cases gaining traction, from unlocking mobile devices to helping police identify individuals.
This is all good news, as face recognition — and particularly a more accurate version that provides true face authentication — has the ability to improve our daily lives without invading our privacy. What gives me pause is that many of these articles promote technology that meets narrow, research-oriented criteria rather than demonstrating systems in real-world use cases. They often get great press, but are disconnected from how face authentication can deliver value while preserving people's privacy across a myriad of potential applications. When in-field pilot trials then fail, the result is confusion, mistrust and false perceptions. Just look at the recent ACLU test of Amazon's online face recognition technology.
As another example, Shanghai-based company Yitu Tech recently achieved a notable success in the Face Recognition Vendor Test, an ongoing evaluation conducted by NIST. While impressive within the limited set of conditions tested, a careful read reveals that the best results were achieved using mugshots and visa photos — images captured under controlled lighting with consistent, frontal views of the subjects' faces. Even the "in the wild" data set is far more controlled, in terms of lighting and position, than the view of a face a camera gets in the real world.
These evaluations were conducted in a highly controlled environment that is great for academia, but the approach has limited predictive power when it comes to verifying live people under real-world conditions. For the sake of pragmatic simplicity, this type of testing leaves out many critical factors that affect face authentication accuracy — lighting, motion artifacts and the subject's physical position relative to the camera, to name just a few. For highly controlled settings, such as recognizing someone at a passport kiosk, this might be fine.
In many respects, this testing is an exercise in "fun with numbers." The results focus on the false non-match rate, or FNMR, also known as the false rejection rate, which is of limited practical use given that the tests were conducted under highly controlled conditions.
While some may get excited over these types of test results, I am passionate about the fact that in the real world, face authentication is poised to enable a whole new paradigm of secure, frictionless and convenient interactions while creating practical new markets and supporting innovative applications. To do this, you need more than great FAR (false acceptance rate) and FRR (false rejection rate) scores. You need technologies that work in all of the environments in which they are asked to perform. That often requires critical supporting technologies, such as 3D recognition and authentication, to accommodate the challenges of real-world conditions.
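To make those two metrics concrete, here is a minimal, hypothetical sketch of how FAR and FRR (FNMR) fall out of a set of face-comparison scores at a chosen decision threshold. The score distributions are synthetic and the function names are my own for illustration — they are not taken from NIST's tooling or any vendor's SDK.

```python
# Illustrative only: synthetic similarity scores, not real evaluation data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical similarity scores in roughly [0, 1]; higher = "more likely same person".
genuine_scores = rng.normal(loc=0.80, scale=0.10, size=10_000)   # same-person comparisons
impostor_scores = rng.normal(loc=0.35, scale=0.10, size=10_000)  # different-person comparisons

def far(impostor, threshold):
    """False acceptance rate: fraction of impostor comparisons accepted."""
    return float(np.mean(impostor >= threshold))

def frr(genuine, threshold):
    """False rejection rate (FNMR): fraction of genuine comparisons rejected."""
    return float(np.mean(genuine < threshold))

for t in (0.5, 0.6, 0.7):
    print(f"threshold={t:.2f}  FAR={far(impostor_scores, t):.4f}  FRR={frr(genuine_scores, t):.4f}")
```

The point of the sketch is the trade-off: raising the threshold drives FAR down and FRR up, and vice versa. Lab results quote these rates for cleanly captured images; in the field, poor lighting, motion and off-angle capture push the genuine and impostor score distributions together, so the same threshold yields worse numbers than the benchmark suggests.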