The Role of Uncertainty in Authentication
Duo Labs

Here at Duo, we think about authentication a lot. We’re constantly on the lookout for new techniques and technologies that will make it easier for users to authenticate, more difficult for attackers to impersonate a user, or both at once. WebAuthn is a great example of this. What really grinds our gears is when we see bad auth decisions, such as mandatory password rotations, security images, or the use of social security numbers as passwords. But in this article, we’d like to avoid focusing on bad auth, and instead focus on uncertain auth.

There are a multitude of different techniques used to authenticate someone or something, and it can sometimes be hard to reason about a given technique’s security properties, which makes trusting it difficult. Without clarity about how an authentication technique actually works, users may find it difficult, if not impossible, to make a reasoned decision about its security. In this article, we aim to surface uncertainty as a qualitative metric by which authentication techniques can be judged, and to illuminate some pitfalls this uncertainty can lead to in practice.

Authentication Basics

Authentication is the act of identifying a user or other entity. A given authentication technique, such as verifying a user’s password, is considered strong if it is difficult for an adversary to impersonate the user. However, performing strong authentication can be cumbersome, depending on the technique, and so many applications perform stronger, less convenient authentication only at certain high-consequence points in time, such as when logging in to a website. For convenience, these applications then rely on a temporary proxy of that authentication, such as a session token bundled in the user’s browser cookies, for continued authentication with every page request. Not every authentication mechanism must be strong; often, we can use multiple mechanisms to compensate for the weaknesses of one or another.
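
To make that tradeoff concrete, here is a minimal sketch (in Python, with purely illustrative names and an in-memory session store) of the pattern described above: one strong, inconvenient check at login, then a cheap session-token check on every subsequent request. It isn’t how any particular framework implements sessions; it just shows the temporary proxy at work.

```python
import secrets
import time

SESSIONS = {}            # token -> (username, expiry); a real app would persist and protect this
SESSION_LIFETIME = 3600  # seconds the proxy credential remains valid

def verify_password(username, password):
    # Stand-in for whatever strong (and slower) check the application uses at login.
    return (username, password) == ("alice", "correct horse battery staple")

def log_in(username, password):
    """The strong, inconvenient authentication, performed once at login."""
    if not verify_password(username, password):
        return None
    token = secrets.token_urlsafe(32)                     # unguessable random session token
    SESSIONS[token] = (username, time.time() + SESSION_LIFETIME)
    return token                                          # sent back to the browser as a cookie

def authenticate_request(token):
    """The cheap, convenient check, performed on every page request."""
    entry = SESSIONS.get(token)
    if entry is None or time.time() > entry[1]:
        return None                                       # no session, or session expired
    return entry[0]                                       # the user this request acts as
```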

Over time, more convenient authentication mechanisms have been developed, such as biometrics. These authentication mechanisms aim to reduce user friction and make the process of manually authenticating as simple and easy as possible. Perhaps someday we humans will develop a strong, automatic, continuous authentication mechanism, but today, increasing authentication usability usually comes with a tradeoff. And unfortunately, it can often be difficult to reason about the security properties of the authentication mechanisms that are easiest to use.

Dependable Authentication Techniques

Let’s start by considering a few authentication techniques that are simpler and easier to reason about. The first of these is password authentication, still likely the most common authentication mechanism in use today. Passwords are something you know and are therefore difficult to steal in the physical world, unless the user does something like write the password down on a post-it (these are things that grind our gears). Humans can be tricked into disclosing their passwords, database breaches can lead to passwords being cracked and stolen, and poor password choices can lead to major headaches for both users and administrators. Because of these problems, we hope passwords die and are replaced with something better. However, passwords are conceptually very simple: if you possess a user’s password, you can authenticate as that user. This makes passwords very dependable (if not always convenient), and users can take straightforward precautions, such as choosing complex passwords and a different password for each site, to help protect their accounts.
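
On the service side, that same simplicity is what makes well-published best practices possible. Below is a minimal sketch (standard-library Python, with illustrative parameters) of the usual precaution: store only a salted, deliberately slow hash of each password, so a database breach doesn’t directly hand an attacker every credential.

```python
import hashlib
import hmac
import secrets

def hash_password(password):
    # A unique salt per user defeats precomputed (rainbow table) cracking.
    salt = secrets.token_bytes(16)
    # PBKDF2 with a high iteration count makes each guess expensive for an attacker.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest          # store both; never store the password itself

def check_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = hash_password("hunter2")
print(check_password("hunter2", salt, digest))      # True
print(check_password("hunter3", salt, digest))      # False
```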


A security key, such as the Yubikey, is also a simple and dependable mechanism for authentication. Security keys store a secret linked to an identity, similar to a long, complex password. Like passwords, if you know a user’s secret, you can authenticate as that user. However, the secret can never leave the security key, and security keys aren’t susceptible to phishing or credential reuse attacks the way passwords are. While the math behind these security properties isn’t simple, it is verifiable and standardized, so we can reason about the overall security of security keys in a straightforward manner. The two most promising attacks against security keys are 1) physical theft of the key itself and 2) tricking a user into tapping (i.e. activating) a security key for an authentication attempt they didn’t initiate, which can be mitigated in some cases.
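
That verifiable math boils down to a challenge/response over a keypair whose private half never leaves the device. The sketch below (using the third-party Python `cryptography` package, with the “key” and the “website” collapsed into one script for illustration) omits everything WebAuthn adds, such as origin binding, attestation, and counters, but it shows the core property: the website only ever holds a public key and a signature over a fresh challenge.

```python
import secrets
from cryptography.hazmat.primitives.asymmetric import ed25519

# Inside the security key: a private key is generated and never exported.
device_private_key = ed25519.Ed25519PrivateKey.generate()

# At registration, only the corresponding public key is given to the website.
registered_public_key = device_private_key.public_key()

# At login, the website issues a fresh, random challenge...
challenge = secrets.token_bytes(32)

# ...the key signs it (in practice, only after the user taps the key)...
signature = device_private_key.sign(challenge)

# ...and the website verifies the signature (raises InvalidSignature if it's wrong).
registered_public_key.verify(signature, challenge)
print("challenge verified; user authenticated")
```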


Another authentication mechanism that is simple to reason about is authentication delegation: delegating the authentication to a trusted third party, and then accepting the result. We see this commonly with email and SMS verification codes, email password reset links, and sign-in with Google, Facebook, or Apple. If the user can prove access to another account that is known to be linked to the same identity, they can be authenticated. Of course, if an adversary is able to compromise that other account, then they can compromise all linked accounts. This is why email provider accounts are often the most critical accounts to keep protected, even more so than your bank accounts, since with access to your email account an adversary can reset the credentials of any other linked account they want. Whether it is desirable or not, authentication delegation is certainly dependable and easy to reason about. Users know that to protect their Spotify playlists, they must also protect their linked email account.
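
The whole mechanism fits in a few lines. Here is a hedged sketch of the emailed one-time code flavor of delegation, with an in-memory store and a placeholder send_email function standing in for a real mail integration; the only “authentication” the service performs is trusting that the email provider authenticated the right person.

```python
import secrets
import time

PENDING = {}   # email -> (code, expiry); illustrative in-memory store

def start_verification(email):
    code = f"{secrets.randbelow(1_000_000):06d}"            # 6-digit one-time code
    PENDING[email] = (code, time.time() + 600)              # valid for 10 minutes
    send_email(email, f"Your verification code is {code}")  # delegate to the mailbox

def finish_verification(email, submitted_code):
    code, expiry = PENDING.pop(email, (None, 0.0))
    if code is None or time.time() > expiry:
        return False
    return secrets.compare_digest(code, submitted_code)     # whoever reads the inbox "is" the user

def send_email(to, body):
    # Placeholder: a real implementation would hand this to a mail provider.
    print(f"to {to}: {body}")
```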

Uncertain Authentication Techniques

However, things get trickier with other kinds of authentication. One mechanism that seems simple at first glance is proximity. Proximity is often used to automatically lock a computer screen when the user walks away, or to automatically unlock car doors when a key fob comes within range. However, even understanding how the technology works, it’s unclear whether we can rely upon this proximity detection in important contexts. Using the car-unlocking situation as an example, how far can the user’s key fob be from the car? It depends on the strength of the wireless signal and whether objects or walls obstruct it. If the range is too short, unlocking may not work reliably; if it’s too long, it may allow a car thief to steal the car. Further, proximity authentication is vulnerable to relay attacks, in which an adversary effectively extends the range of the proximity detector without the user’s consent. Tight timing constraints can be placed on proximity methods to attempt to defend against relay attacks, but this potentially comes at the cost of reliability.
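
To illustrate the timing idea (and why it’s fragile), here is a toy sketch of a challenge/response between a car and a fob that rejects responses arriving too slowly. All of the names, the shared key, and especially the threshold are illustrative; real distance-bounding schemes need hardware-level timing far tighter than anything achievable in Python.

```python
import hashlib
import hmac
import secrets
import time

SHARED_KEY = secrets.token_bytes(32)   # provisioned into both the car and the fob
MAX_ROUND_TRIP_SECONDS = 0.002         # toy bound; a relay over a long distance adds latency

class KeyFob:
    def respond(self, challenge):
        # The fob proves it holds the shared key by MACing the fresh challenge.
        return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def car_unlocks(fob):
    challenge = secrets.token_bytes(16)
    start = time.perf_counter()
    response = fob.respond(challenge)                 # over the air in real life
    elapsed = time.perf_counter() - start
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    # Both checks must pass: the cryptographic one and the timing one.
    return hmac.compare_digest(response, expected) and elapsed <= MAX_ROUND_TRIP_SECONDS

print(car_unlocks(KeyFob()))   # True locally; a relayed fob should blow the time budget
```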


Biometrics are another area rife with uncertainty. Perhaps the most common biometric authentication mechanism is the fingerprint scanner. It scans a user’s fingerprint, compares it to a previously-recorded image, and if the two images are similar enough, authenticates the user. Even in that cursory description, there are a number of ambiguities that make fingerprint authentication difficult to reason about. Consider how the fingerprint itself is scanned. Depending on whether the scanner is optical, capacitive, or ultrasonic, it may be possible to fool it with a 2D- or 3D-printed copy of a victim’s fingerprint. Then, beyond the hardware imaging of the fingerprint itself, can we trust the image comparison algorithm to be both sound and correct? Fingerprint scanning software tends to be an opaque black box, even on somewhat open platforms such as Android. Because of this, it becomes more a matter of trusting the vendor (Apple, Google, Samsung, Microsoft, Lenovo, Dell, etc.) to have a secure fingerprint scanning implementation than of having confidence in the technology itself. Fingerprint authentication may be enough to deter the average phone thief, but it is uncertain whether a given implementation will protect against a targeted attack.
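
To see where the ambiguity lives, consider a toy version of the matching step. Real matchers compare minutiae or learned features with proprietary algorithms; this sketch just reduces two “scans” (here, arbitrary feature vectors) to a similarity score and compares it against a tunable threshold, which is exactly the knob vendors rarely document.

```python
def similarity(scan_a, scan_b):
    # Stand-in for a vendor's proprietary comparison: 1 minus the mean absolute difference.
    diffs = [abs(a - b) for a, b in zip(scan_a, scan_b)]
    return 1.0 - sum(diffs) / len(diffs)

def fingerprint_matches(enrolled, presented, threshold=0.95):
    # Raise the threshold and you reject more legitimate fingers;
    # lower it and you accept more impostors. Where it sits is rarely published.
    return similarity(enrolled, presented) >= threshold

enrolled = [0.20, 0.80, 0.50]
print(fingerprint_matches(enrolled, [0.21, 0.79, 0.50]))   # True with this toy data
print(fingerprint_matches(enrolled, [0.60, 0.30, 0.10]))   # False
```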


Next, we have facial recognition, which has been used in traditional and mobile OSes for logins, as well as by law enforcement and world governments for tracking purposes. Despite its ease of use, it is difficult to reason about the security properties of any given facial recognition solution when it is used for authentication. Even focusing on the most advanced implementations, such as Apple’s FaceID, we are still left with many questions. Apple claims a false positive rate of 1 in 1,000,000 for FaceID, which uses depth perception via an infrared camera to build a 3D facial model before unlocking the device, in part to protect against attacks using 2D printed faces. However, this number is apparently based on the probability that a random face will unlock your device, which suggests there are perhaps 7,500 other people on Earth whose faces can natively unlock your phone (see the rough calculation below). And Apple provides very few details about how this number was reached.
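
The 7,500 figure is just the advertised false positive rate applied to the world’s population, as this back-of-the-envelope check shows (the population figure is approximate):

```python
world_population = 7_500_000_000      # rough figure
false_match_rate = 1 / 1_000_000      # Apple's claimed FaceID false positive rate
print(world_population * false_match_rate)   # 7500.0 people whose faces might unlock the device
```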


To avoid false negatives when users change their facial attributes (facial hair, sunglasses, etc.), FaceID must be somewhat permissive. Because of this, researchers periodically demonstrate new attack methods against even its most recent versions. And this author is actually a fan of Apple’s implementation in practice. Looking at other vendors, which may use only a single camera for face unlock, the results are far, far worse. The confusion about which vendor’s technology is secure and which is not surely leads to a false sense of security in some cases. Facial recognition is another biometric technology that may be incredibly convenient, but it is difficult to quantify its security properties.

Voice authentication is an interesting mechanism, simply because it is often used for telephony and call-center applications where other, stronger authentication techniques like security keys are infeasible. Voice authentication typically serves as an alternative to an operator confirming personal data, such as asking for a user’s account number and/or the answers to security questions. In voice authentication, a user records an assigned verbal phrase to build a profile, then repeats the phrase at a later date to authenticate themselves to the provider. The provider uses the tone and inflections in the user’s voice to differentiate a legitimate user from an imposter (a rough sketch of this flow appears below).
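
Conceptually, the flow looks something like the sketch below: enrollment averages several recordings of the phrase into a profile, and verification scores a new recording against it. The feature vectors, the cosine-similarity scoring, and the threshold are all illustrative assumptions on our part; commercial systems use proprietary models, which is part of the problem discussed next.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def enroll(recordings):
    # Average several repetitions of the assigned phrase into one "voiceprint" profile.
    return [sum(col) / len(col) for col in zip(*recordings)]

def verify(profile, new_recording, threshold=0.90):
    # The threshold has to tolerate colds, stress, and bad phone lines,
    # which is exactly the permissiveness questioned below.
    return cosine_similarity(profile, new_recording) >= threshold

profile = enroll([[0.9, 0.1, 0.4], [0.8, 0.2, 0.5]])
print(verify(profile, [0.85, 0.15, 0.45]))   # True with this toy data
```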

Intuitively, this technique appears to be trivially exploitable by an adversary taking a voice recording and then playing it back. This is akin to using a 2D image for face unlock, or an optical scan of a fingerprint; both are techniques that are still used, but really shouldn’t be. Commercial solutions claim they can detect anomalies and repeated recordings, and perhaps some can. However, all of the criticisms of the biometric techniques above apply here. How does the classifier work? Whose implementation is being used by the commercial entity the user is calling? While voice authentication may seem to make an account more secure by requiring an adversary to capture a voice recording, in practice human voices often change due to temperature, stress, sickness, and other physiological factors. How permissive is a classifier that must handle these continual changes? Voice deepfakes are a thing. It is entirely unclear how resistant to attack this technology is. Additionally, when voice authentication fails, the alternative is typically to fall back to another authentication mechanism anyway, such as verifying personal data. Consequently, attackers may gain more options to attack accounts when providers use voice authentication.

On the far end of the biometric space lie Implicit Authentication (IA), or continuous authentication, techniques, such as measuring accelerometer data as a user taps the virtual keyboard or monitoring a user’s gait. To date, IA is not something we’ve seen outside of early research, but finger vein scanning technology appears poised to bring it to a wider audience. However, while vein patterns look fairly unique, it is difficult to find quantifiable data about how unique they really are. Most IA studies seem to test for false positives and false negatives within a very small population (fewer than 1,000 participants), and without large-population testing, it is unclear whether these biometric features are even unique to each user, let alone resistant to adversarial spoofing. It is very difficult to reason about the security properties of such techniques without much more information than we currently have.
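
One reason small studies say so little: with zero false accepts observed, the classic “rule of three” only bounds the false accept rate at roughly 3/N with 95% confidence, where N is the number of impostor attempts tested. Here is a sketch of that arithmetic (the function name and the example trial counts are ours, purely for illustration):

```python
def far_upper_bound(impostor_trials, false_accepts=0):
    # With zero observed false accepts, ~3/N is an approximate 95% upper bound
    # on the false accept rate (the statistical "rule of three").
    if false_accepts > 0:
        return false_accepts / impostor_trials   # crude point estimate otherwise
    return 3 / impostor_trials

print(far_upper_bound(1_000))        # 0.003: a 1-in-333 bound, nowhere near 1-in-a-million
print(far_upper_bound(10_000_000))   # 3e-07: the scale of testing a stronger claim would need
```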

Consequences of Uncertainty

Despite the criticisms listed above, we believe that having biometric and other authentication techniques available is a great thing, especially when threat models are considered and the technique’s strengths and weaknesses are well suited to the use case. Biometrics tend to be extremely convenient, and the best implementations can raise the bar high enough to provide a very secure authentication mechanism. The purpose of identifying uncertainty as a qualitative metric by which to evaluate authentication methods is to highlight the negative effects that can follow when those strengths and weaknesses are not considered properly, which is often the case.

Uncertainty brings about the potential for misuse, leading to an actual reduction in security. For instance, FaceID is pretty secure, and Apple spends a lot of effort to make it so. Apple also advertises to users that FaceID is a secure solution. But will users recognize that face unlock on Android devices is often trivially bypassable? Should we expect users to know how each vendor’s product works? Is it even possible to do so, with each biometric authentication system being essentially a black box? We’ve reached a point where, in many cases, we must place our trust in the vendor instead of the technology because, unlike the well-published best practices for password storage or the design documentation for security keys, no major vendor seems to publish its fingerprint or facial recognition algorithms, or even statistics on their efficacy.

Build for the Future

As security professionals, we can do better. We can turn to standards bodies like the IEEE and W3C to help foster secure biometrics standards in a way that is visible and transparent and reduces the uncertainty around these authentication mechanisms. We can encourage third-party testing and analysis of these authentication mechanisms, with published data that can be used to verify the efficacy of these techniques. With the help of standards and certification organizations, we can ensure that snake oil techniques can no longer masquerade as legitimate security measures. By reducing the uncertainty surrounding these authentication techniques, we can make them more secure.