Although “zero trust” is the popular name for the alternative security model everyone’s talking about these days, it’s not always clear what it means, or whether it describes the policy changes you may want to make in your organization.
This is because it depends on your definition of the word “trust.” There are two possible ways to look at it:
Trust = Granting access to resources without verifying beforehand
This is what John Kindervag means by “trust,” and it’s why he coined the term “zero trust”: you should never do this.
Or you can define it another way:
Trust = Granting access to resources because you verified beforehand
This is why Duo refers to its offering as “trusted access.” When someone wants to connect to a system or application, you authenticate the user, check out the device, and make an access decision based on however many factors you want to consider for that particular resource.
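That decision flow can be sketched as a per-resource check. The factor names and policy lists below are hypothetical illustrations, not any particular product’s API:

```python
# A minimal sketch of a per-resource access decision. The factor names and
# required-check lists are hypothetical, not a specific product's API.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # e.g. password plus a second factor verified
    device_healthy: bool       # e.g. OS patched, disk encrypted
    location_allowed: bool     # e.g. request comes from an expected network

def decide_access(req: AccessRequest, required_checks: list) -> bool:
    """Grant access only if every factor this resource requires checks out."""
    return all(getattr(req, check) for check in required_checks)

# A sensitive app can demand more factors than a low-risk one.
req = AccessRequest(user_authenticated=True, device_healthy=True,
                    location_allowed=False)
decide_access(req, ["user_authenticated", "device_healthy"])    # True
decide_access(req, ["user_authenticated", "location_allowed"])  # False
```

The point of the sketch is that the set of required checks varies per resource, so the same request can be good enough for one application and not another.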
The important thing about trust is that it’s neither binary nor permanent. You don’t trust someone to do just anything; you trust them to do a particular set of actions, on a particular system, perhaps on behalf of a particular entity (such as a third party accessing a client record as a member of one brokerage firm). And you don’t trust them forever; you trust them for as long as certain risk-related conditions hold. You might stop trusting them once enough time has passed that their password might have been compromised, or when their endpoint becomes vulnerable to an exploit because its software is outdated.
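One way to picture trust that is neither binary nor permanent is a check that lapses whenever a risk condition changes. The thresholds below are made-up policy values for illustration, not recommendations:

```python
# A sketch of trust that expires when risk conditions change. Both
# thresholds are assumed example policy values.
from datetime import datetime, timedelta

PASSWORD_MAX_AGE = timedelta(days=90)  # assumed: re-verify after 90 days
OS_MIN_VERSION = (14, 2)               # assumed: oldest acceptable OS release

def still_trusted(password_set_at: datetime, os_version: tuple,
                  now: datetime) -> bool:
    """Trust lapses when a risk condition trips, not only at logout."""
    if now - password_set_at > PASSWORD_MAX_AGE:
        return False  # old enough that the password may be compromised
    if os_version < OS_MIN_VERSION:
        return False  # endpoint software outdated, so possibly exploitable
    return True
```

Nothing about the user has to change for trust to end; the mere passage of time, or a newly discovered vulnerability, is enough to trip a condition.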
This is the reason why network perimeter-based security is less than optimal: organizations tend to trust a user forever, to do anything, as long as they come from the right IP address and give the right password. We have known for a long time that open-ended trust is a bad idea, but given the available technology, it’s been a tough problem to address — until now.
These trust conditions change a lot more often than they used to. Users have different devices, some of which are personal ones; they want access from different locations; the applications they want to use aren’t in the employer’s data center. Criminals can more easily get a username and password and reuse them in an automated fashion, at scale. As a result, we need to check more factors and do more verification before deciding to grant access for a fixed period of time.
What if we never trusted the user? Unfortunately, that doesn’t work too well either. Any amount of additional friction annoys them. The most popular forms of additional authentication require the user to demonstrate physical presence and control by taking a physical action: tapping the green button in a push notification app or touching a YubiKey.
Even doing that once per session can result in pushback from your customers, depending on what they’re used to. This is why administrators have the option of configuring “remembered devices” and grace periods, balancing the risk windows (the time in which a certain risk might rise to unacceptable levels) against the user experience.
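A remembered-device grace period might look like the sketch below. The eight-hour window is purely illustrative, and real products expose this as admin configuration rather than code:

```python
# A sketch of a "remembered device" grace period: skip re-verification
# while the risk window stays acceptable. The duration is illustrative.
from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(hours=8)  # assumed policy value

def needs_reverification(last_verified_at: datetime, now: datetime,
                         device_remembered: bool) -> bool:
    """Re-prompt when the device isn't remembered or the window lapsed."""
    if not device_remembered:
        return True
    return now - last_verified_at > GRACE_PERIOD
```

Lengthening `GRACE_PERIOD` trades a wider risk window for fewer prompts; shortening it does the reverse, which is exactly the balance the administrator is tuning.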
Whether you call it BeyondCorp, zero trust, or any other kind of model, the issue of trust is central to how you configure your access policies.
- What do you trust the user to do?
- For how long?
- What changes will require you to re-verify the factors?
- How does this translate into an acceptable user experience?
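One way to make those four questions concrete is to record the answers per resource in a policy record; every field name and value below is illustrative, not a real product’s schema:

```python
# Each field answers one of the four questions above. All names and
# values are hypothetical examples.
policy = {
    "resource": "payroll-app",
    "trusted_actions": ["read", "approve"],        # what the user may do
    "session_lifetime_hours": 8,                   # for how long
    "reverify_on": ["new_device", "new_location",  # what forces a re-check
                    "password_reset"],
    "remember_device_days": 30,                    # the UX trade-off
}
```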