Andy Ellis standing on stage against a black curtain and an RSAC 2019 purple banner

Improve Risk Perception, Get Better Decisions

SAN FRANCISCO—It is a trope within the security industry that humans are “awful” at risk management and make poor choices, when in fact, humans are rather good at making decisions, Andy Ellis, CSO of Akamai Technologies, told attendees at RSA Conference.

A stakeholder from the business side approaching the security team rarely looks forward to the conversation. Business owners expect to be held to an “impossible standard,” while security teams tend to focus on the “horrible risks” being taken and react with “Why are you doing this?” Neither side thinks about the reasons for the other’s behavior; each immediately finds fault because it doesn’t understand the circumstances that led to the decision.

“In their mind, security is the bad guy. We are the people whose goal is to tell them how ugly their project is, and all the poor choices they made, and how we don’t think they should be employed at this company anymore,” Ellis said. “We’re not inclined to work together. We’re telling the story where they are the villain. In fact, they’re telling the opposite story, where we’re the villain.”

The decision made by the business may seem incomprehensible to the security professional—but it also presents an opportunity for the security team to learn why the person made that particular choice.

Ellis used the “OODA loop” decision-making model developed by United States Air Force Colonel John Boyd to lay out his case for why humans were “awesome”—not perfect, but pretty good—at risk management. The model frames decision-making as a recurring cycle of observing what is happening, orienting or filtering the raw information through past experiences and cultural values, deciding what to do next, and acting on that decision.

Context Matters

Observing involves paying attention to the world, but also sifting through the myriad inputs and picking out which ones are important. (“Is someone shooting at me, or is it a sunny day?”) Framing the information in light of what is happening helps make sense of all the different inputs. If a person is giving a talk on a stage, information about the number of pedestrians outside is not relevant, but that same piece of information is very important if the person is driving down the street.

“This is a challenge we have, that it is hard for us to put ourselves in the mode of our counterparts when we are engaging in anything, let alone complex conversations about risk,” Ellis said.

Organizations have “historical paranoia,” where the focus is on not doing something that previously got someone else in trouble, without explaining why. If anyone asks for the reason, the question is dismissed. For example, many organizations have a security policy requiring passwords to be changed every 90 days. That made sense when it took roughly 90 days to crack a password. Today, passwords can be cracked far faster, or have already been stolen through other means. Even though the 90-day rotation no longer helps, organizations persist in following the policy because that is what security teams are used to.

Another example is writing down passwords and putting them in a password vault. It is good advice, but it is the “exact opposite” of what security professionals tell people, Ellis said. The context security professionals have is often the wrong context for the world people live in, and that disconnect is one reason each side cannot understand the decisions made by the other.

“We collected a list of everything that ever got anyone in trouble, and say, ‘Don’t do those things,’ but we don’t understand why we say that anymore,” Ellis said.

Assessing Risk

Some risks are more straightforward to understand than others. Showing up to hear Ellis speak was a risk: if he turned out to be a terrible speaker, the impact would be 50 lost minutes. Falling asleep during a team meeting could result in getting fired. In more complex situations, the benefits and trade-offs become obscure and the risks are harder to understand.

In the 1970s, the risk of buying a Ford Pinto was clear: the gasoline tank sat at the rear of the car, so a rear-end collision could rupture it and cause a leak. Fast forward forty years, and cars are now networks of computers that drive themselves around. Pushing the accelerator is no longer a mechanical operation but one that kicks off multiple computer programs. A computer has many more things that can go wrong, and fixing it is harder than fixing a mechanical problem. Stunt hacking showed that attackers could take over a Jeep Cherokee, but the mental shift of thinking of cars as computers has to happen first in order to really understand the dangers of driving one.

“I can’t really explain, at the level I explained the Ford Pinto, what the bad design choice was [for the Jeep Cherokee],” Ellis said.

People adjust how much they are willing to lose or spend based on the situation. In a game where the player places a bet and then guesses which number will be rolled on a 20-sided die, the likelihood of winning is the same regardless of the size of the bet. Logically, a person willing to play for $1 should be willing to play for a million dollars, but that ignores cost context. As people’s perception of the risk went up, they acted to reduce its impact.

Nearly everyone in the room indicated with raised hands that they would play the game at initial bets of $1 and $10. People started dropping out when the bets went up to $100, and by $1,000 most of the room had stopped playing; only one person held out until $100,000. People value something by what they have to give up.

“$1 is change for me. $10 starting to be interesting, that is a drink. $100, that is a nice dinner at RSA [Conference]. $1,000, that is interesting money. I am totally out,” Ellis said.
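The talk did not spell out a payout structure, but the arithmetic behind the demonstration can be sketched with one assumption: a wrong guess simply forfeits the stake. The minimal Python sketch below shows that the odds stay fixed at 1 in 20 no matter the bet, while the money actually at risk, and therefore the cost context, grows with the stake.

```python
from fractions import Fraction

def dice_bet(stake):
    """Return win probability and expected loss for guessing one face of a fair d20.

    Assumption (not stated in the talk): a wrong guess simply forfeits the stake.
    """
    p_win = Fraction(1, 20)              # the odds never depend on the stake
    expected_loss = (1 - p_win) * stake  # but the expected loss scales with it
    return p_win, expected_loss

# The bet sizes Ellis walked the audience through
for stake in (1, 10, 100, 1_000, 100_000):
    p, loss = dice_bet(stake)
    print(f"stake=${stake:>7,}  win probability={p}  expected loss=${float(loss):,.2f}")
```

The win probability never moves; only the expected loss scales with the bet, which is exactly the cost context the audience was responding to as the stakes rose.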

An organization has to consider that cost context in its risk management discussions. A product manager will not react well to the idea of pushing out a product’s release date to address some risks, because they treat those dates as fixed and don’t dare alter them. A security leader could instead suggest moving the risky feature to the next release cycle, buying time to address the risks without touching the original date.

“When [the decision] is in their cost context, it’s really invaluable,” Ellis said.

Understanding Decisions

Understanding what people pay attention to helps in understanding how they view risk and why they made the decisions they made. The human brain constantly deprioritizes information that doesn’t seem unusual or directly relevant because otherwise there is simply too much sensory data to deal with. Drivers see pedestrians while driving but won’t remember details about the ones that didn’t run into the street, because those details weren’t important to the situation. People tend to pay attention to things that are new rather than things that happened last year, or to things that affect their “tribe.” They also remember “surprising” things that “feel true.” Problems that are “far away” in time, geography, or social group tend to get dropped.

Understanding the decision-making loop is important because adversaries are doing the same thing: observing what the organization is doing and modifying their own actions to inject conflicting information or to hide their activities. Organizations can look at their own loops the way adversaries do to find areas for improvement. A key step is to reduce the amount of information coming in; if the organization is paying attention to things it isn’t acting on, it shouldn’t be paying attention to those things, Ellis said. Along with better instrumentation, organizations need to review their models to make sure they have accounted for potential traps in how they frame the data. Once a decision is made, the organization has to check its assumptions to make sure the outcome makes sense. Finally, organizations need to make a plan and practice it so that everyone knows exactly what to do when something happens, whether through table-top exercises or training.

People make decisions based on what they paid attention to. Shaping the way they perceive risk, by taking into account their models and things they fear, influences the end result—the decision and the actions taken.

“Humans are situationally awesome at risk management,” Ellis said.