

Duo Hackculture: Tracking Vulnerability Reporting

Problem solving is a survival technique. Long before the advent of computers, the earliest security “vulnerabilities” were patched by experience. City-states quickly learned not to tear down their gates to accept giant wooden horses. Early radio announcers learned to avoid mass panic by noting the fictional nature of their reporting on alien invasions. I assume those who improperly interacted with fire during the dawn of man were quickly dropped from the human user-group.

The first wave of computer creators were pioneering codebreakers and brilliant engineers. As time passed and computers became more accessible, the creators of software and hardware were no longer their sole users. Many with early access to computers began to poke around inside their systems; digging through system architecture was the fastest path to knowledge. In these intrepid investigations, many began to find security vulnerabilities. (When we talk about vulnerabilities, we refer to avenues for unintended privilege escalation or access, whereas exploits are codified processes to leverage those vulnerabilities.)

Long before the advent of the mass Internet, individuals and groups tasked with protecting and maintaining computer networks began to informally share system vulnerabilities they had found. Others would respond with their functional workarounds. Individual government agencies threw together “Tiger Teams” to test their security, simulating malicious attacks. Still, there was no national movement to report and record vulnerabilities to the public or vendors.

A watershed moment for computer vulnerabilities was the “Morris Worm.” While the worm is often cited as opening the door to a new field of exploits, its lingering impact is how it shaped news coverage of vulnerabilities and exploits. As quickly as the worm spread to a substantial portion of Internet-connected computers, so too did reports propagate among journalists. The FUD coverage in major news outlets only served to feed the newly minted Hollywood image of the malicious computer man: the rogue console cowboy, the black hat. A somewhat panicked DARPA suddenly faced a crisis of confidence. In an attempt to protect businesses and the growing consumer computer market, DARPA collaborated with researchers at Carnegie Mellon to roll out the Computer Emergency Response Team Coordination Center (CERT/CC). Industry teams followed suit, developing internal strategies for finding and dealing with exploitable bugs.

Curiously defying all reasonable expectation, the private sharing of vulnerabilities did not eliminate errors across all systems and usher in an era of total security and privacy. Disturbingly often, in fact, it did not even result in the patching of all of the reported errors (good thing this doesn’t happen anymore). Surprisingly, this actually frustrated a few people. Partially as a response to CERT/CC’s obfuscated model of vulnerability disclosure, a few mailing lists that publicly and fully disclosed and discussed bugs and exploits gained huge readership. Notable among these was Bugtraq, which for a period was moderated by Elias Levy of stack-smashing fame. Historically, these mailing lists were more public versions of their precursors, bulletin board systems (BBSes), where security issues and informal writeups mimicking white papers were often posted and discussed.

Government often paints in broad strokes. This was especially true of Congress’s 1986 Computer Fraud and Abuse Act. The CFAA placed harsh penalties on such specific acts as “intentionally [accessing] a computer without authorization or [exceeding] authorized access.” With such a proverbial big stick, corporations and prosecutors could stop speaking softly and start pushing back hard against anything they could paint as hacking. Though the act is relatively old, new zeal surrounding “cyber” cases means the CFAA wave has swept up many legitimate security researchers, greatly harming the cause of public disclosure.

Lawsuits have stacked up against those who pursue free (libre) sharing of sensitive system information. Notable cases like US v. Riggs, Sony v. Hotz, MBTA v. Anderson et al., and others have targeted the authors of vulnerability reports. This targeting hit younger authors who published in zines like Phrack and 2600 much harder than older, established academics who published their findings in computer and security journals. A vicious cycle formed: young hackers began to see themselves as outlaws, guilty of the crimes of curiosity and exploration, as the Hacker’s Manifesto so aptly puts it. The projection of this “other” status provided little motivation to cooperate and coordinate bug tracking. Instead, those responsible for developing, maintaining, and patching software continued to deal with zero-day vulnerabilities.

Has the landscape for vulnerabilities truly changed since the days of the zines, where bragging rights and information sharing trumped all? Major vulnerabilities are still disclosed, and they still cripple important systems. If there has been a fundamental shift in attitudes to vulnerability disclosure, it has come hand-in-hand with the monetization of hacking as a service.

Now a generation that grew up experiencing the unfairness of the CFAA is hitting back with innovations like managed bug bounties. Bug bounties set up an open environment for streamlining security research and disclosure with a relaxed set of legal concerns. This managed approach, coupled with some monetary incentives, has led to a discernible uptick in responsible disclosure. In contrast to “full disclosure,” this approach gives companies and researchers more time to fully consider the implications of a vulnerability and thoughtfully develop and release a patch, while still holding them accountable for the timely delivery of a fix.

Even with today’s bounties and bug tracking, we still see cases that reinvigorate the old debate around responsible disclosure. Here on the Labs team, we’ve experienced similar issues firsthand: making sure disclosures are full and timely requires that we balance our mission of protecting the Internet against our desire not to throw other security teams under the bus. Are we doing a good job? What is an appropriate timeline for disclosure? Are movements to reform legislation surrounding research enough to shift public policy and opinion? Feel free to leave us your thoughts in the comments or reach out to us at labs@duosecurity.com.