Image source: Unsplash/George Prentzas
The goal of technological biometrics for IAM, then, is not to build systems that recognize people as well as humans do, but better.
No biometrics without errors
Biometrics covers all modes of identification that rely on an individual’s physiological features, such as their characteristic fingerprint, vein pattern, iris pattern, or voice, as well as behavioral characteristics such as how someone walks or types.
Biometric factors can thus be quite complex. Pattern recognition algorithms can make two types of errors when matching biometric features: the false accept and the false reject. In the first case, an unauthorized person is erroneously granted access; in the second, an authorized person is erroneously turned away. Tuning a biometric system’s parameters typically lowers the false acceptance rate at the cost of raising the false rejection rate, making the system as a whole more restrictive, and vice versa. A high false acceptance rate compromises the system’s security, while a high false rejection rate compromises user comfort and satisfaction. Because both rates vary with the system’s tuning, the overall accuracy of a biometric system is measured by the crossover error rate (also called the equal error rate): the error rate at the operating point where false acceptance and false rejection are equal.
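This trade-off can be illustrated with a small simulation. The score distributions below are invented for illustration, not taken from any real system: sweeping the decision threshold trades false accepts against false rejects, and the crossover error rate is read off roughly where the two curves meet.

```python
# Illustrative sketch (not a real biometric system): sweep a match-score
# threshold over simulated genuine and impostor score distributions and
# locate the crossover point where FAR and FRR are (approximately) equal.
import random

random.seed(42)

# Hypothetical match scores in [0, 1]: genuine comparisons score high
# on average, impostor comparisons score low.
genuine = [min(1.0, max(0.0, random.gauss(0.75, 0.1))) for _ in range(1000)]
impostor = [min(1.0, max(0.0, random.gauss(0.35, 0.1))) for _ in range(1000)]

def far(threshold):
    """False acceptance rate: impostors scoring at or above the threshold."""
    return sum(s >= threshold for s in impostor) / len(impostor)

def frr(threshold):
    """False rejection rate: genuine users scoring below the threshold."""
    return sum(s < threshold for s in genuine) / len(genuine)

# The crossover error rate is read off at the threshold where the two
# error curves meet; here a coarse grid search approximates that point.
thresholds = [i / 100 for i in range(101)]
eer_threshold = min(thresholds, key=lambda t: abs(far(t) - frr(t)))
print(f"threshold ≈ {eer_threshold:.2f}, "
      f"FAR = {far(eer_threshold):.3f}, FRR = {frr(eer_threshold):.3f}")
```

Lowering the threshold below the crossover point makes the system more forgiving (FRR falls, FAR rises); raising it makes the system more restrictive.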
Biometric data: valuable and vulnerable
In addition to the errors inherent to pattern recognition, IAM systems based on biometric factors can be hacked and subverted like any other IAM system. Unlike property or knowledge factors (such as ownership of a token or smartphone, or knowledge of a password or passphrase), however, compromised biometric factors have implications that reach far beyond the security of a single IAM system: a “stolen” biometric factor, such as an individual’s fingerprint, can never be replaced the way a token or password can. The individual will forever have to live with a biometric factor that no longer conclusively proves his or her authenticity.
What is more, biometric factors are usually personal data in a GDPR sense. The compromise of a repository of biometric data is therefore not only a threat to the safety and security of the system that uses these factors in its IAM, but can also be a substantial problem for the company as a whole, incurring legal liability for the breach of personal data.
Low-tech and high-tech attacks
Methods of hacking biometric authentication range from hands-on and low-tech to sophisticated high-tech. German information security specialist Jan Krissler, known in the infosec scene as “starbug”, subverted the Touch ID authentication of the then-new iPhone 5s as early as 2013. From a finger smudge on a smartphone surface, he fabricated an artificial finger bearing the fingerprint, using conventional wood glue and graphite. A year later, he “stole” politician Ursula von der Leyen’s fingerprint from high-resolution photographs taken during an official event. By then, his and his collaborators’ methods had evolved from wood glue (which becomes unusable after a day) to replicating the fingerprint in more durable latex for later and repeated use.
Another authentication method that has become increasingly popular in recent years is facial recognition. The earliest systems could be bypassed simply by presenting a portrait photo of an authorized person. Systems were then improved with liveness detection: authentication was only granted if small movements proved that the face in front of the camera belonged to a living person. In 2014, however, Krissler and colleagues demonstrated that this proof of liveness could be faked by simply moving a pen across a facial portrait photo of an authorized person.
Easy access for average faces
This kind of attack requires knowing the identity of at least one authorized person and obtaining a matching facial portrait photo. However, researchers at Tel Aviv University recently developed a new kind of attack on facial recognition systems that needs no such information: they constructed a “master face” that has a relatively high likelihood of unlocking any given facial recognition system.
The underlying principle: the machine learning algorithms behind facial recognition do not compare faces using all available information. Instead, they rely on the subset of features that proved most useful for classification during training. A facial image can therefore be optimized so that precisely these features lie as close as possible to the average across a population of faces. So far, the researchers’ master faces have been shown to successfully impersonate 20% of the faces in a database of 13,000.
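The geometric intuition can be sketched with a toy matcher. This is not the Tel Aviv team’s actual method; the feature vectors, dimensions, and threshold below are invented for illustration. The point is only this: if a system accepts whenever two feature vectors are similar enough, a candidate near the population average matches a disproportionate share of enrolled users.

```python
# Toy illustration of the "master face" idea: a candidate vector near the
# population centroid unlocks many enrolled templates at once.
# All parameters (DIM, noise level, THRESHOLD) are invented for this sketch.
import math
import random

random.seed(7)
DIM = 32  # hypothetical size of the feature subset the matcher relies on

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Simulated enrolled templates: population mean plus individual variation.
mean = [random.gauss(0, 1) for _ in range(DIM)]
enrolled = [[m + random.gauss(0, 0.4) for m in mean] for _ in range(500)]

THRESHOLD = 0.85  # acceptance threshold, chosen for illustration

def coverage(candidate):
    """Fraction of enrolled templates the candidate would unlock."""
    return sum(cosine(candidate, t) >= THRESHOLD for t in enrolled) / len(enrolled)

random_face = [random.gauss(0, 1) for _ in range(DIM)]
print(f"random face unlocks {coverage(random_face):.0%} of templates")
print(f"average face unlocks {coverage(mean):.0%} of templates")
```

A real master-face attack is far more involved: the researchers reportedly optimized actual face images against a trained recognition network rather than working directly on feature vectors. But the effect exploited is the same: proximity to many enrolled templates at once.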
What does this mean for the future? At the very least, biometric authentication is more secure when combined with property or knowledge factors in multi-factor authentication. In a few years, people with average faces (or other average biometric features) may have to rely on tokens or passphrases even more than their more distinctive-looking counterparts.