Wade Baker, co-founder of Cyentia Institute, and Ben Edwards, senior data scientist at Cyentia, recently joined Dennis Fisher on the Decipher podcast to discuss the design and analysis of the Cisco Secure Security Outcomes Study, which surveyed more than 5,000 security professionals around the world. This is an edited and condensed transcript of the discussion.
Dennis Fisher: How do you start to dig into a big data set like this?
Ben Edwards: It actually starts before we get the big blob of data right? We have to design the survey and when we design the survey we have to keep in mind some research questions we'd like to answer. Survey design is probably the less glamorous part of the data science we do, but it is super important because if you don't have a good survey design, you're not going to be able to answer any interesting questions.
Wade Baker: We've participated in research before where the survey data was dumped on us after it was conducted and to be honest, that's a lot harder and less desirable than when we can take part in the design of the survey and there was a lot of discussion upfront. What do we even want to study in this second volume? What kinds of things do we want to know? How do we work those into actual survey questions?
Dennis Fisher: So as you're designing that survey at the beginning, do you take into account the people that are going to be answering it? Do you think about it from their perspective and think what questions do we need to be asking them that are relevant to their job functions and demographics?
Wade Baker: Yeah, we try to, and this survey, depending on the way you look at it, is either challenging or not challenging. There are so many different types of people that take this survey because we have a list of somewhere between a dozen and twenty roles they could choose from, all across various realms of security. So it's a really broad sample in that sense, which kind of makes it easy, but at the same time it makes it hard, because we study pretty specific domains like threat detection and response in this particular volume, and you just kind of know that, hey, not everybody's going to know the answer to this question about how they use threat intelligence, right? Because this is the IT person, not the threat intel person. So it's hard from that perspective.
Ben Edwards: And you hope you get a big enough sample that a lot of those variations in the answers average out, so you do get to measure those effects. And the CISO's perspective is going to be different from the threat analyst's.
Wade Baker: Didn't we find that they were way overconfident when we asked them about outcomes and things like that? Didn't they consistently rank at the top of the list whereas the auditors and compliance people were at the bottom?
Ben Edwards: Yeah, the compliance people were the least excited about the performance of the program and the CISOs were like, yeah, we're doing great, a hundred percent. It's also hard because we want to ask questions specific enough that people have specific answers to them, but we can't ask everything or else it'd be a thousand-question survey, and that's not what we're trying to do.

Dennis Fisher: The art of asking a specific question I think is very underrated, especially in a survey where you don't have a chance to reframe it.
Wade Baker: It is, and even though we try to weed out issues upfront, inevitably we realize later, when we're analyzing results and thinking, this seems really weird, what in the world is going on here, that maybe they interpreted a question completely differently. So there's all kinds of challenges there, and I totally agree with what you're saying. It is underrated and one of the harder aspects of doing it right. I think a lot of people just throw some questions out there and get it done, and it shows when all is said and done.
"The compliance people were the least excited about the performance of the program and the CISOs were like yeah we're doing great."
Dennis Fisher: Wade, you mentioned that this survey went all around the world. How much do those cultural factors play into the answers that people might give?
Wade Baker: Honestly, we could do an entire study on just that, because we have created regional views. I do think a lot of it is cultural. For instance, one thing that sticks out in my mind both last year and this year: when we ask about the level of success in the outcomes that we measure in this study and the strength of the security practices that we measure, Japan is consistently either last or second to last on both. I just doubt that's because they're actually lower. I think there's a cultural humility and a sense of reality. It clearly shows up in the data, and it's consistent over the last two times we've done this, with completely different samples.
Ben Edwards: When you see that result over and over again with something like Japan, you start to think, Yes, maybe there is something to that.
Wade Baker: And then we asked about zero trust and SASE in this one, and there are some places where you think, I don't know if this is the forefront of zero trust adoption, but they're way ahead. When you ask, where are you on this scale of maturity and integration? They say, I'm way, way down the road. So we had lots of discussions about how we interpret that. Did they leapfrog because they were able to go there quicker, and so they've actually found it easier to make that turn? Or is it a still-developing sense of what zero trust is, and therefore, yes, we're doing that, because their bar isn't as stringent?
Ben Edwards: A lot of the security stuff we do just doesn't make any sense because there are humans in the loop, and one of the interesting ones was looking at outsourcing and insourcing. People who thought they were doing really well at threat detection and response were either fully insourced or fully outsourced. I think that makes some sense, right? If you're kind of half in the water, you're not going to do things well. But then when we looked at mean time to remediation, which is actually a metric of what's really going on, the people who insourced did it about six days faster on average than people who outsourced. So those outsourced people think they're doing very well, even though they're slower on this one metric. I don't know what the explanation for that is, but I do think that was counterintuitive.
Dennis Fisher: I wonder how much of that has to do with the size of the organization and what their resources are.
Ben Edwards: Yeah, it's interesting that you ask, because we did ask about staffing in a couple different ways. We asked how big their company is, just the number of employees. We asked how much security staff they have. I think we asked how much IT staff they have, not just security. We asked about the size of their threat detection staff. And no matter how we sliced it or normalized it by company size, we didn't see a terribly strong correlation between the amount of staffing you had and outcomes. So even if you have one threat staff member for 100 employees versus one for 10,000 employees, it didn't seem to make a lot of difference in how well you were doing threat detection and response. I don't know what the answer to that is, but I do think that having good people is better than having a bunch of people, and having good people is better than having a bunch of automation.