
Q&A: Casey Ellis

Casey Ellis, founder, chairman and CTO of Bugcrowd, recently joined Lindsey O’Donnell-Welch on the Decipher podcast to discuss how vulnerability disclosure programs are changing. This is a condensed and edited version of the conversation.

Lindsey O’Donnell-Welch: What first inspired you to focus on vulnerability disclosure as part of your career? Was there an epiphany moment or experience that made you want to focus in on this, or did it just happen?

Casey Ellis: No, there were definitely a couple of epiphany moments. I've always loved hacking, and innovation as an adjacent train of thought to hacking, so that was the precursor to it. With Bugcrowd, that meant really just looking at traditional solutions, traditional approaches, when it comes to outsmarting the adversary, and realizing that the math is wrong if there are lots of people building software, doing awesome stuff, and making mistakes that introduce vulnerabilities in the process. And then you've got this undefined crowd of adversaries with their own incentive to find those issues and exploit them. One person being paid by the hour, no matter how good they are, is eventually going to lose. I think the irrational founder gene in me got locked on that idea and I wanted to try to find ways to solve it. On the other side of it, having grown up in the hacker community, I wanted to keep my buddies out of jail, in a sense, and that side of it in particular ended up turning into Disclose.io. But in the background, it was really about how do we normalize and promote the role of the hacker, and people who can think differently and do bad things to computers, but for good reasons. How do we normalize that in the market? Because at that point in time, if you were a hacker, you were inherently bad and inherently evil. I think in the 10 years since there's been a lot of progress in folks getting their head around the idea of there being such a thing as a digital locksmith, and not that we're all just burglars or something like that.

Lindsey O’Donnell-Welch: One major part of vulnerability disclosure, and the conflict that we've seen around that, has been how hackers are perceived by companies at a broader level. How has the term “hacker” changed since you first started looking at vulnerability disclosure policies over the past decade?

Casey Ellis: I think "hacker" is still definitely a term that frightens some people. But I think the broader view - in technology now, not just in security - is that it is actually a dual-use concept. It's not a concept that has an inherent moral loading, which is kind of where we started: If you're doing this sort of thing in technology, that means you must be nefarious or malicious, and therefore we shouldn't trust you. Therefore, if you're a hacker, you're a bad person, please go away. I think we're past that now, broadly speaking, to the point where people see it as a morally agnostic skill set or craft or trade. And it really becomes a question at that point of where you draw your ethical lines as you do the hacking thing, so to speak. We've put a lot into this, as have a bunch of others over the years, to try to reclaim the word “hacker,” and I don't feel that will ever be a fully solved problem. But I think we've reintroduced this idea of it being possible for that sort of thing to be done in good faith. That wasn't true when we started; I believe it is true now, which is awesome.

Lindsey O’Donnell-Welch: You still hear these stories: Back in 2017, you had the whole incident with DJI, the drone maker, and the security researcher who tried to report a bug and was met with threats. And even this past year, you had the drama with the Missouri governor vowing to prosecute a journalist who reported a security flaw. So we're still seeing these types of incidents crop up, but at the same time, I would say that there have been a lot of positive initiatives in the U.S. government: Look at Hack the Pentagon, or even this past year, CISA and the DHS have done a pretty good job of making it known that they want to recruit hackers, and that this is a good thing.

Casey Ellis: Most definitely. With CISA’s work around the binding operational directive that they put out with OMB to mandate vulnerability disclosure across the U.S. federal government, they put a lot of work into really explaining what was going on to a bunch of folks who would probably be unfamiliar with it at first pass. Because that is the starting point: When you talk to someone about hackers being helpful, and now they're going to tell you where you've made a mistake, that can be quite an unusual and confronting idea the first time you hear it. So you have to basically take people through the why and the process, and why it's actually really important as something that you do.

The other thing that CISA put a lot of work into was including safe harbor clauses in the recommended boilerplate language. Some of that stuff actually drew from Disclose.io, and parts of it ended up going back into Disclose.io as an open source standardization project. And that's really reflective of the fact that the laws haven't caught up with this idea of dual-use and the ability to hack in good faith. Most things are still written in a way that assumes that you're breaking the law, and you've got to prove that you aren't, which is not like most other crimes. So the idea of putting templates out there that allow organizations to create a carve-out for people who are working in good faith, that, again, is something that's fairly novel, and fairly hard to do. When lawyers get led into uncharted territory, they tend to get quite verbose in the interest of being legally complete, and that ends up being confusing for folks. So CISA did a really good job of shortcutting that, and Bugcrowd’s proud to be the partner of choice on actually delivering those VDPs and, in some cases, bug bounty programs to the federal government here in the U.S.

Lindsey O’Donnell-Welch: Can you talk a little bit more about the impact of that partnership, especially across different government agencies, and how the rollout has been?

Casey Ellis: Yeah, definitely. The big thing is that it's not as simple as just putting a policy out there on a website, then opening an email inbox, and then optionally offering to pay people if you're taking a VDP and actually turning it into a bug bounty program. There's the vulnerability triage, there's the remediation workflows, there's making sure that information gets to the right place within the organization so that stuff can get fixed. And in the meantime, there's someone who's found an issue who's waiting for a response and trying to understand whether or not they've been helpful and if that thing's going to get fixed. All of that process needs management, and that's a lot of what Bugcrowd built out in the form of our team and the stuff that we built into the platform, to really simplify that and to specialize in doing that sort of thing well. Government agencies aren't usually on the cutting edge of technology; they oftentimes need a fair bit of help with implementing new ideas like this one. So this is where we come in, to help them actually run the program but also guide them through setting it up if that's needed as well.

Lindsey O’Donnell-Welch: I’ve heard plenty of stories of companies that want to start rolling out a program but don't think about the triage, as you say, or the reporting aspect of it, or even being able to handle the different kinds of vulnerabilities that come up.

Casey Ellis: Yeah, definitely. With starting it up, this is something that we saw a lot of early on at Bugcrowd, when bug bounty as a concept got a bit of a halo effect around it. And that still exists, but early on that was the dominant feature: people just wanting to do it because they wanted to get in TechCrunch and make a big noise about how good they were at security, without necessarily thinking through the downstream adjustments that it's ultimately meant to cause. To me, with public programs, in particular vulnerability disclosure and bug bounty programs, the thing that's actually the most powerful about them is recognition outside of the security team within the organization that “yeah, mistakes happen, to err is human, we are going to have things that are vulnerable that we didn't intend to put there.” That's not a truth to hide from, that's a truth to basically just accept, and then try to start working with. Let's operate on the assumption that to err is human, let's figure out where the risks that are introduced as a byproduct of that exist, fix those, and then try to learn from that in ways that reduce how frequently that happens in the future. You can't just run headlong at it; that's not a switch that you can necessarily flick on as an organization. It's usually a process of crawling first, then walking, then running.

“Let's operate on the assumption that to err is human, let's figure out where the risks that are introduced as a byproduct of that exist, fix those, and then try to learn from that in ways that reduce how frequently that happens in the future.”

Lindsey O’Donnell-Welch: Right, and even adopting that as a mindset, that security errors are going to happen and that mistakes will happen, that seems to me like it could be a whole cultural mindset shift for the work environment. So it's tougher than just rolling out a simple program. It's the entire environment that needs to change.

Casey Ellis: Yeah, and even the management culture, the leadership, all these different things. Doing all this stuff with Bugcrowd, with Disclose.io, having worked in security pretty much since I finished high school - I'm passionate about this space of vuln disclosure and crowdsourcing, but I'm just fascinated with security in general as a concept - thinking about it through that lens, I'm increasingly convinced that a lot of what we see on the internet is the product, ultimately, of people not thinking that it would be possible in the first place. It's this idea of “ostrich risk management,” as I call it, where if you bury your head in the sand, all of a sudden the problem won't matter anymore. I think there's been a period of time in technology and on the internet where that has actually been true, where people have gotten away with not doing as much as they maybe should have. But especially over the past two years, with changes in the use of technology, and I think changes in adversary behavior as well, that's fairly obviously not a good strategy going forward. So helping people make that shift is something that we do a lot.

Lindsey O’Donnell-Welch: When companies are looking at these good-faith type policies and putting their heads together, where is that decision-making process being handled? Is there any kind of collaboration with security teams?

Casey Ellis: Usually, left to its own devices, it'll be the security team or the product team; in some cases, they just pick something up, copy and paste it, kind of slap it on their website, and off it goes from there. That's the thing that happens sometimes. More often, though, they'll interact with in-house counsel or external counsel, and sometimes the marketing team gets involved to make sure that the verbiage is on brand, and different things like that. It can end up becoming quite an involved process and a bit of a decision by consensus. And that often is why these policies end up being a million pages long and having all sorts of confusing stuff in them, because everyone wants to add something. That's part of what Disclose.io puts out there as a boilerplate, to say, here's the simplest possible version of this that's going to be as complete as it can be. And frankly, this is where Bugcrowd comes into it as well, in terms of helping organizations navigate that when they've got different stakeholders who are trying to work out: Is this a good idea or not? How do we frame the language to make it safe? All those different things can be a pretty complicated conversation to have for the first time. I think for folks who have been through it before, it gets a lot easier, but if you've never interacted with this sort of thing before, it can be quite like, “Whoa, what the hell are we doing?” So oftentimes we’ll get involved as Bugcrowd to actually help basically align those stakeholders, understand what the different concerns are, and try to bring that back to a midpoint.

Lindsey O’Donnell-Welch: You've had experience both in Australia and in San Francisco, so you may have a good perspective on how the state of vulnerability disclosure differs across different areas of the globe right now. Is there any place worldwide that might have more mature vulnerability disclosure guidelines or rules in place?

Casey Ellis: I'll speak to the ones that are more mature. The Netherlands has been really good with this stuff for a long time. There's the t-shirt, “I hacked the Dutch government, and all I got was this lousy t-shirt” - that has been around for 12 or 13 years, and it's a collectible. So the Netherlands in particular is a country that basically, at some point in time, decided that this was really important, put the effort into standardizing and normalizing it, and has reaped the benefits from it since. Estonia is pretty amazing as well. It's interesting, because you've got these smaller countries, population-wise, that are able to be a bit more agile, that just decide to do a thing and then go off and do it. Honestly, I think the U.S. has been pretty incredible in terms of its leadership in this area. Once the Hack the Pentagon stuff got rolling, that kind of rolled downhill, in terms of congressional bills - Hack the XYZ - coming out of Congress, and then BOD 20-01 out of DHS and OMB. Australia is catching up. We’re talking a lot with the Department of Home Affairs around their cybersecurity strategy, and they included vulnerability disclosure in one of the four primary recommendations in that document... But in terms of it being a thing that we just do, I'd say that they're a little further back from the U.S. or our Dutch friends, though there's work going into basically correcting that. And I'd say the same goes for the Singaporean government, which is working really actively on this stuff, or Dubai. There are different places around the world where you see it switched on, and it gets moving from there. There's also lots of activity in Europe around starting to fold this in behind some of the leading-edge stuff they did around privacy, because you can't really have privacy without security. So this is coming in behind that as a way to make sure that some of the stuff they've done to protect citizens’ data is actually possible in the first place, because the controls have integrity to them. So in general, it's a mishmash... at this point in time, there's a decent cohort of countries that are actively working on catching up.

Lindsey O’Donnell-Welch: Looking ahead to 2022, do you see any big trends emerging when it comes to vulnerability disclosure? I know that you have your “Inside the Mind of a Hacker” report too, and I would love to hear any takeaways from that as it relates to where things are going.

“I think the pandemic has driven a lot of introspection, in the community and on the researchers’ side, with people wanting to take better control of their destiny, so to speak, from a career standpoint.”

Casey Ellis: Yeah, most definitely. I do think we're heading into another year that has a lot of elections in it. And I think at this point in time, the relationship between information warfare and cybersecurity - that's always been a thing, but it really reared its head in 2020 in a way that a lot of people understood, in a way that they hadn't really understood before. There's a lot of stuff coming up next year: the midterms here in the U.S., there's probably going to be a federal election in Australia, and lots of other countries. So those are all opportunities for that subject to come back up. And it does come back to: To what degree can we trust the systems that we rely on to actually conduct the democratic process? To me that's a really important question to be able to answer. I do see VDP as a fundamental tool to combat both the security side of that and also the information warfare side of that, because you can go to a voter, even if they're non-technical, and say, it's neighborhood watch for the internet, and they'll get it, right? It doesn't mean that it's perfect, but it means that I've just explained to you something that we are doing to try to keep your vote and your information safe, which is good. I see that playing out pretty interestingly next year.

Probably another one is just the temerity of attackers in general. People don't seem to care as much about getting caught anymore. And that, to me, was a pretty big shift in attacker behavior that started in 2020, but has continued and, I think, spilled over into the ransomware groups and cybercriminal operators in 2021. Combine that with the fact that ransomware operators' business model is working quite well, and that means there's a lot of money to spend on tooling and innovation. Where reinvesting some of those proceeds into being more effective goes next, I'm not sure, but as an entrepreneur, putting myself in their shoes, that's probably what I'd be working on right now. So I'd expect to see the outcome of that start to play out next year. That one's a little terrifying. But there's a lot of work going into staying on top of it, into combating ransomware not so much as a specific type of malware but as a business model. It used to be that attackers could only monetize things that were of value when they stole them, but ransomware basically introduced this idea of denying service and being able to monetize that, which broadens the scope of your potential attack surface as a criminal operator. So if it works, it's going to continue to happen, because that's how capitalism works.

And with the “Inside the Mind of a Hacker” report, I think the pandemic has driven a lot of introspection, in the community and on the researchers’ side, with people wanting to take better control of their destiny, so to speak, from a career standpoint: "I want to be in a position where I can actually have more of a direct input into what I'm getting back." And just thinking about the Great Resignation, the sort of millennial midlife crisis thing that's happening right across the world, it has definitely played out in the hacker community in some ways that I think are actually quite productive. Like 80 percent of the folks we surveyed had found vulnerabilities they'd not encountered before the pandemic. That was partly a product of technology change, but also because they're all learning new things, which I think is pretty awesome. On the tech side of it, 74 percent of the folks responded that vulnerabilities in general had increased since the onset of COVID-19. That whole idea of digital transformation and how quickly we all had to pivot to basically respond to the pandemic - speed is the natural enemy of security when it comes to things like that. So I've seen a lot of shifts in vulnerability patterns that do oftentimes look like a product of people just doing stuff quickly and not necessarily thinking through the downside. I think we're going to continue to unpack how that's played out over 2022 as well.

Lindsey O’Donnell-Welch: I’m curious about how the pandemic and this changing viewpoint of work and remote work are really having an impact on the InfoSec community overall.

Casey Ellis: Definitely, but I mean, my own experience of that is getting stuck on the opposite side of the planet for 18 months - that was a thing. I think everyone's had some sort of version of that, in terms of ways that they've had to change how they operate and work. But more generally, there's this idea of just access and distributed access. Another piece in the “Inside the Mind of a Hacker” report was that 45 percent of the respondents believe that restrictive scope actually inhibits the discovery of critical vulnerabilities that are meaningfully impactful to an organization. I think that's more true now. That's always something that I've believed is true, but it's more true now than it was before, because everyone's accessing things from the outside. So there's this idea that, as an organization, it's not just your front-door website, it's your entire entity, and all of the different ways into that. Ultimately, if I'm an attacker trying to get in and create an outcome, I don't care how I do it, I just want to get in. So scope needs to reflect that even more urgently next year than it has in the past.