After specializing in identity and authentication for the past 15 years, I can say that the biometric industry always seems to declare “this is the big year”: the year biometrics finally go mainstream at mass scale, beyond the only places they have found pervasive acceptance, physical security and specialized programs. But each year, that moment proves elusive.
I even worked for a leading biometric company 10 years ago, leading their strategy to become a well-established solution for IT. The biometric story sounds so logical: you don’t need a token, it proves who you are uniquely, and it is convenient. Well, I was naive. Fortunately, I moved on, but many in the industry have not. Waiting for next year to be the “big year”, they fail to learn from the mistakes that would otherwise get them there.
It is not that biometrics are a bad idea, because they are not. Rather, they are poorly executed and not aligned with the core security principles that are evolving, the parallel that biometrics must successfully follow if they are to scale.
For strong authentication to be valid, either the password must be replaced, or another factor must be used alongside the password in a way the target application is aware of. Otherwise, the application remains open to many of the same attacks as before biometrics were implemented. Unfortunately, very few IT applications, if any, natively understand what a biometric is, so the typical approach is a client (made by the biometric vendor) that encrypts the password in a local cache, decrypts it when needed to log in, and passes it through to the application for authentication.
What was solved here? Not security. The application is still vulnerable to password attacks that never touch this client. The client is seldom pen-tested to establish how well it secures that password in its local cache, and we know that hackers don’t always go through the endpoint to execute an attack.
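To make the pass-through pattern concrete, here is a minimal sketch of the vault-and-replay flow described above. All names (`BiometricPasswordVault`, `legacy_app_login`, the template bytes) are hypothetical, and base64 stands in for whatever encryption a real vendor client uses; the point is only that the target application never sees anything but the original password.

```python
import base64


class BiometricPasswordVault:
    """Illustrative model of a vendor 'password vault' client: the
    biometric only gates access to a locally cached password, which is
    then replayed to the application unchanged."""

    def __init__(self, enrolled_template: bytes, app_password: str):
        self.enrolled_template = enrolled_template
        # Stand-in for the encrypted local cache (a real client would
        # use actual cryptography, not base64).
        self._cached = base64.b64encode(app_password.encode())

    def login(self, live_template: bytes):
        # 1. The biometric match happens only inside the client...
        if live_template != self.enrolled_template:
            return None
        # 2. ...then the cached password is recovered and passed through.
        return base64.b64decode(self._cached).decode()


def legacy_app_login(password: str) -> bool:
    # The target application still authenticates with the password alone;
    # it has no idea a biometric was ever involved.
    return password == "hunter2"


vault = BiometricPasswordVault(b"fingerprint-template", "hunter2")
print(legacy_app_login(vault.login(b"fingerprint-template")))  # biometric path
print(legacy_app_login("hunter2"))  # password path still works, biometric bypassed
```

Both calls succeed, which is exactly the problem: any attack that guesses, phishes, or replays the password goes straight to the application without ever meeting the biometric client.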
There are server-based methods where the template is stored on a server or a smart card and the server acts as a proxy to the application. However, the password at the target application is still the primary authenticator, so an attacker can simply go around the proxy to the application itself. OTP servers also act like a proxy, but for the most part those servers are tightly integrated into the authentication process, so the application is aware not just of the proxy but of the process itself. This is why RSA supports a wide variety of specific applications and protocols, as do other vendors (and OATH, if you want something non-proprietary).
Physical access doesn’t work the same way. In that environment, where biometrics have been successful, the applications are built with native support and do not perform any pass-through. Context is key, and it is not appropriate for vendors that have succeeded in physical access to claim equivalent use and security in another environment where the approach, as applied, is fundamentally flawed.
While other forms of authentication are fairly binary (they either match a definite value or they fail), biometrics are not. Biometrics work by way of an algorithm, and capability varies greatly from vendor to vendor. In fact, to the credit of the biometric vendors, this is where much of their intellectual property lies: the mathematical execution of how well the live attribute matches the stored one. They have come a long way. Many are outstanding. However, none are perfect, even by their own claims. All algorithms have some rate of “false accept” and “false reject”; if they cannot be right 100% of the time, then one of the two must occur at least some of the time.
This is due to many factors that are nearly impossible to eliminate completely. Every algorithm I am familiar with also has some internal tuning that biases it toward one error or the other, or somewhere in the middle. The question becomes: if a person is not authorized to access your most critical data, is it acceptable for them to get in once in a while? Or do you choose to have more authorized people rejected when it really is them? This is a judgment call for the company considering the technology, since it owns the risk, but it seems silly to me to spend $150 per user on a solution that is supposed to improve this situation (just me).
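The tuning trade-off described above can be sketched in a few lines. The match scores and thresholds below are entirely made up for illustration; real matchers produce vendor-specific scores, but the shape of the trade-off is the same: raising the threshold lowers false accepts and raises false rejects, and vice versa.

```python
# Hypothetical match scores (higher = better match), not from any real algorithm.
genuine_scores = [0.91, 0.85, 0.78, 0.88, 0.62, 0.95]   # right person presenting
impostor_scores = [0.30, 0.55, 0.48, 0.71, 0.25, 0.40]  # wrong person presenting


def rates(threshold: float):
    """Return (false accept rate, false reject rate) at a given threshold."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr


for t in (0.5, 0.7, 0.9):
    far, frr = rates(t)
    print(f"threshold={t:.1f}  FAR={far:.2f}  FRR={frr:.2f}")
```

With this toy data, a lenient threshold of 0.5 rejects no genuine users but admits some impostors, while a strict 0.9 admits no impostors but rejects most genuine users. There is no setting where both rates are zero, which is the point of the paragraph above.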
Verdict: Biometrics are well suited for physical security but poorly suited to IT systems. They are better suited to identification than authentication: determining whether this is likely the individual who possesses the token used for authentication, rather than performing the authentication itself.
COMPROMISE, RECOVERY AND REVOCATION:
Biometrics are unique to individuals and can’t be stolen – or can they? Reference templates of the enrolled live attribute must be stored somewhere. Unless they are stored encrypted, and generally they are not (except when stored on a Match-on-Card, a smart card with a microprocessor and built-in crypto), those templates can be stolen. I know the argument from the biometric industry: “the templates are not images of the live attribute but rather a mathematical representation that cannot be reused to reconstruct an image of the live attribute from which it was taken.” Well, there have been reports that it can be done, so I would not entirely rule it out. I would also not fault all vendors equally; their technology varies greatly, and some likely make reconstruction a near impossibility while others don’t fare so well. Either way, this common debate is almost useless to have, since there are other vectors to explore and exploit that are unique to biometric implementations.
If the algorithm is compromised, then a stolen template may possibly be used to reconstruct an image. But this doesn’t matter. Why bother with reconstruction when an attacker can:
1. Simply reuse the stolen template, presenting it directly to the same authentication process it was intended for, or to the client (bypassing the reader) to unlock the cache where the passwords live. Scary, right?
2. Scarier still: once these templates have been compromised, you cannot reissue someone’s iris or finger to create a new one that invalidates the previously issued one. Even though passwords are increasingly inadequate security mechanisms, at least they can be changed and reset, while PKI certificates are built on a model where revocation is central to their function.
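The replay point above can be illustrated with a toy matcher. Everything here is hypothetical (the template derivation, the user name, the exact-match comparison); real matchers do fuzzy scoring, but the structural problem is the same: if an attacker can inject a stolen template into the matching step, no finger is required, and the victim cannot “reset” the compromised attribute afterwards.

```python
import hashlib

# Server-side template store. In many deployments these are not encrypted.
# The sha256-of-bytes "template" is a stand-in for a real vendor template.
enrolled_templates = {"alice": hashlib.sha256(b"alice-fingerprint").digest()}


def authenticate(user: str, presented_template: bytes) -> bool:
    # Real algorithms score similarity rather than compare bytes exactly;
    # exact comparison is a deliberate simplification here.
    return enrolled_templates.get(user) == presented_template


# Normal flow: the sensor derives a template from the live finger.
live = hashlib.sha256(b"alice-fingerprint").digest()
print(authenticate("alice", live))    # legitimate match

# Attack flow: a template stolen from the store is replayed directly,
# bypassing the sensor entirely. Alice cannot revoke her fingerprint.
stolen = enrolled_templates["alice"]
print(authenticate("alice", stolen))  # also matches
```

Contrast this with a password or certificate: after the breach, the credential is reset or revoked and the stolen copy becomes worthless. The stolen template stays valid for as long as Alice has that finger.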
Verdict: Biometrics have a fatal flaw: they cannot account for the worst-case scenario, something every security model should be able to do. Not because we expect to encounter it, but because without that capability we have no security when it is really needed.
HIGHLY PROPRIETARY AND COSTLY:
While there is an ANSI standard for the templates that are enrolled and used, most vendors still use a proprietary template and algorithm that creates dependencies on sensors, reader hardware, clients and server software. Scaling on such unique configurations is risky: organizations can easily find themselves hijacked by the vendor, or forced to rip-and-replace and re-enroll every person to get away (if not using a standard ANSI template). This is exactly the type of solution that CIOs are trying to avoid in principle. And the bigger you go, the riskier it gets (another reason you do not see many large biometric deployments in IT settings).
Despite all of this, even if an organization finds the right balance and acceptance of risk for the technology in its own program, a credential that needs to be trusted outside your own organization (or vice versa) likely lacks the trust the other party would require. Increasingly, organizations have obligations to third parties to either:
1. Provide a level of transparency and assurance that the controls in place meet specific, verifiable criteria comparable to those applied to their own systems.
2. Authenticate individuals to 3rd party systems using a credential issued internally.
3. Authenticate 3rd parties to their own systems using a credential issued by a 3rd party.
If we look at OpenID, a framework that lets consumers use their LinkedIn (or Google, or Facebook, etc.) credential to log in to a website by pointing back to the issuer for authentication, the thing that is missing is: “what is the process by which the issuer verified who that individual really is at the time of account creation, and what controls do they have in place so that I can trust it to the level I require?” In the OpenID model, assurance is likely low. If you have requirements for higher assurance, you will need transparency from the third party (or you will need to provide it to your relying parties) in the form of a practice statement, certification or governance model of controls and processes that can answer the question “can we trust their process?”
The PKI model, while complex, commonly supports this by way of public root or Federal Bridge models. While two parties may negotiate direct agreements with one another, that is not scalable. Public roots and bridges are scalable because they do not require individually negotiated processes and certifications for each relationship. This is precisely why all of the current trust models are built around them (government, contractors, SAFE-BioPharma, and even unaffiliated individuals trusting one another).
So biometrics can work inside one large (or small) organization, because it can impose its own trust model without external dependencies, but we increasingly live in a world where trust between parties matters. Building an expensive and risky silo that doesn’t really defend against many of the password attacks we are seeing (while other methods do) doesn’t seem like a logical choice. It would also be remiss not to mention that biometrics neither sign nor encrypt data, both of which are becoming essential parts of any security program.
Verdict: Great for internal programs with no third-party dependencies, used for identification rather than security, with no foreseen demands for encrypting data or signing for authenticity.
EXPANDING ATTACK SURFACE:
If only it were that easy. Gone are the days of simply using strong authentication to keep the bad guys out of the network and unable to log in to a computer. It used to be that IT could define the perimeter wherever they wanted the firewall to sit. Now the perimeter is wherever the device is, and that is increasingly tricky. Biometrics is entirely predicated on the concept of user-centric authentication. If we believe that IoT (the Internet of Things) is real (as I do), then these smart devices that self-configure and self-function interact not only with users but with other devices and services on their own (which is part of the whole “smart” concept).
Biometrics cannot effectively perform authentication in these transactions because no user is present. Yes, a forensic collection approach can be executed, but it is not ideal (though complementary) and quite a stretch. Passwords, although horrible, work for people, devices and services authenticating to one another in any sequence. So do certificates, which is why PKI and SSL have been used for so long and are increasingly being applied to IoT as it matures, essentially adding a whitelist model to an ecosystem of permitted devices and services.
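One simplified way to picture that certificate whitelist model is below. The device names and certificate bytes are entirely made up, and a real deployment would validate full certificate chains during a mutual-TLS handshake rather than compare raw fingerprints; the sketch only shows why no user, finger, or iris needs to be present for a device to authenticate.

```python
import hashlib

# Hypothetical fleet: each device holds its own certificate (DER bytes
# stand-ins here) and presents it when connecting.
device_certs = {
    "thermostat-01": b"cert-bytes-for-thermostat-01",
    "door-lock-07": b"cert-bytes-for-door-lock-07",
}

# The ecosystem keeps an allowlist of SHA-256 fingerprints of permitted certs.
allowlist = {hashlib.sha256(der).hexdigest() for der in device_certs.values()}


def device_permitted(presented_cert: bytes) -> bool:
    # No user is present in this transaction: the device authenticates
    # with its certificate, and only allowlisted fingerprints get in.
    return hashlib.sha256(presented_cert).hexdigest() in allowlist


print(device_permitted(device_certs["thermostat-01"]))   # permitted device
print(device_permitted(b"cert-bytes-for-rogue-device"))  # rejected device
```

Passwords can be wedged into this machine-to-machine pattern too (badly), but a fingerprint reader cannot: there is simply no live attribute to present.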
Verdict: Unless you believe IoT isn’t real, this is another nail in the coffin for biometrics, arriving before the industry has even addressed the problems that came before it.
Organizations are still struggling with mobile devices. They are either trying to impose specific controls, lock the devices down, or lock their apps into a secure container with less risk. All the while, the user experience must stay simple, and there are likely 100 apps on each device with varying (and unvalidated) security models. Putting a layer of convenience (biometrics) on top that does not remove the risk of the password doesn’t really solve the CISO’s problem here. Further, you have the same dilemma of device- and vendor-specific hardware being built in, or worse yet, covers and cases that are model-specific.
Yes, it’s true: Apple did a fantastic job integrating the biometric sensor and software into the iPhone (and I mean excellent by all measures). I love using it on my iPhone; it is very convenient. But I still need to remember my password, and when I use TouchID, it is only passing my password back to wherever I am trying to authenticate. This goes back to the beginning of the article.
Even though many in the biometric industry constantly point to Apple’s TouchID as proof of THE mass-scale event they have been waiting for, it is just that: mass scale of identification, not security. Even Apple knows this. That is why users still need to log in to their devices with their passcode each time the device reboots, and why account recovery via two-step verification uses a one-time password over SMS to verify the very same user who may already be using TouchID.
The flaw? Apple knows that the Apple account was created long before the TouchID enrollment, when there was (and still is) very little governance to ensure it belongs to the actual individual. So much so that they can’t trust it enough when it really counts. They will find a way to weave TouchID into ApplePay and other services, but look for them to revise and strengthen how people are enrolled and validated into those services; TouchID will still be a closed ecosystem. And Apple knows this as well.
This article is not a bashing of biometrics, but rather an attempt to educate those who are considering the technology and are not familiar with its nuances. The biometric industry tends to paint a very rosy picture without engaging the depth of considerations that information security professionals must deal with every day. That is where the focus should be, and if it is, I think it will force a real discussion about whether biometrics are right for your organization, and push vendors to engage more meaningfully.
This article was originally posted on Peerlyst