<![CDATA[Decipher]]> https://decipher.sc Decipher is an independent editorial site that takes a practical approach to covering information security. Through news analysis and in-depth features, Decipher explores the impact of the latest risks and provides informative and educational material for readers curious about how security affects our world. Mon, 27 Sep 2021 00:00:00 -0400 en-us info@decipher.sc (Amy Vazquez) Copyright 2021 3600 <![CDATA[Attackers Target Critical VMware Bug]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/attackers-target-critical-vmware-bug https://duo.com/decipher/attackers-target-critical-vmware-bug Mon, 27 Sep 2021 00:00:00 -0400

Attackers are targeting the remote code execution vulnerability in VMware’s vCenter Server that the company disclosed last week. Large-scale scanning activity is ongoing from a number of separate sources looking for vulnerable instances, and working exploit code is publicly available.

VMware has confirmed that attackers are actively exploiting the flaw, and security researchers have seen a variety of actors running mass scans across the Internet searching for vulnerable hosts. As of Friday, security firm Censys identified more than 3,200 potentially vulnerable vCenter Server instances exposed to the Internet, offering a broad target base for attackers. The vulnerability (CVE-2021-22005) allows a remote attacker to upload an arbitrary file without authentication, and it affects several current versions of vCenter Server.

“Understanding the new conditionals in AsyncTelemetryController makes vulnerability development trivial. You are, in effect, asking VMware’s unauthenticated analytics service (which collects telemetry data from other components of vCenter to report to VMware’s cloud) to write a file to disk in a path of your choosing. When data is sent to the telemetry service, it is first written to a log file using log4j2 into either /var/log/vmware/analytics/stage (if using the /ph-stg endpoint) or /var/log/vmware/analytics/prod (if using the /ph endpoint),” Censys CTO Derek Abdine wrote in an analysis of the bug.

“Once the file has been written, the last step is to find an external mechanism that will execute the data contained in the file. This is not difficult, as there are very well known locations in Linux-based operating systems that will read a file with any extension and execute its contents. Censys has confirmed execution, but will not release this last step to give defenders a bit more time to patch.”
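
For defenders, the quickest useful check is whether a vCenter host still exposes the analytics service at all, since VMware’s workaround disables it. Below is a minimal reachability probe, sketched in Python; the exact endpoint URL is not reproduced in the quoted analysis, so the path constant is a placeholder to be filled in from VMware’s advisory. The probe tests exposure only and does not attempt the file write described above.

```python
import ssl
import urllib.error
import urllib.request

# Placeholder: the full analytics endpoint path is not reproduced in this
# article; fill it in from VMware's advisory or the Censys write-up. "/ph-stg"
# and "/ph" are the endpoint names quoted above.
TELEMETRY_PATH = "/analytics/telemetry/..."

def telemetry_endpoint_reachable(host: str, timeout: float = 5.0) -> bool:
    """Return True if the vCenter analytics/telemetry service answers at all.

    A reachable endpoint does not prove the host is vulnerable; it only means
    the service that VMware's workaround disables is still exposed.
    """
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # vCenter commonly uses self-signed certs
    url = f"https://{host}{TELEMETRY_PATH}"
    try:
        with urllib.request.urlopen(url, timeout=timeout, context=ctx) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as err:
        # Any HTTP error response (401, 403, ...) still shows a listener;
        # a 404 suggests the endpoint has been disabled or does not exist.
        return err.code != 404
    except OSError:
        return False
```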

The vulnerability affects versions 6.7 and 7.0 of vCenter Server, as well as versions 3.x and 4.x of Cloud Foundation. For organizations that cannot deploy the fixed version of the software immediately, VMware has published a workaround to mitigate the vulnerability.

The ongoing attacks against the vulnerability prompted the Cybersecurity and Infrastructure Security Agency to issue an advisory urging organizations to update as soon as possible.

“Due to the availability of exploit code, CISA expects widespread exploitation of this vulnerability,” the advisory says.

]]>
<![CDATA[FISMA Update Could Boost CISA's Authority]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/fisma-update-could-boost-cisas-authority https://duo.com/decipher/fisma-update-could-boost-cisas-authority Fri, 24 Sep 2021 00:00:00 -0400

Nearly 20 years after it was first passed, the Federal Information Security Modernization Act is on deck for a possible upgrade, and the country’s top cybersecurity officials say it can’t come soon enough.

The 2002 FISMA legislation was meant to bring the government into the information age by setting up a series of requirements for agencies, including maintaining a current asset inventory, doing risk assessments, and developing and implementing security programs. Even in 2002, much of that was relatively basic work, but the federal government does not often move first or fast on technology. Congress updated FISMA in 2014 in an effort to deal with the quickly changing attack landscape, but the intervening seven years have seen a dramatic increase in both attack volume and complexity, and federal cybersecurity leaders told Senate lawmakers Thursday that another update is sorely needed.

“FISMA is outdated and the status quo is clearly not working. We should shift from box-checking to a culture of true risk assessments,” Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA), said during a hearing of the Senate Homeland Security and Governmental Affairs committee Thursday.

An update for FISMA is in fact in the works. The top two members of the committee, Sen. Gary Peters (D-Mich.) and Sen. Rob Portman (R-Ohio), are developing new legislation designed to both update the technological requirements for agencies and solidify the authority of CISA, which did not yet exist when FISMA was last amended. CISA is the lead cybersecurity agency for the federal government and also has authority over critical infrastructure, but there are many other federal agencies with defensive, investigative, or other cybersecurity responsibilities, including the FBI and other components of the Department of Homeland Security. Easterly said any update to FISMA should make clear what authorities and responsibilities her agency has.

“With regard to FISMA, any update should codify CISA’s operational role and hold departments accountable for the investments they make in their teams,” she said.

“And we have to move from checking boxes to real operational risk management.”

“FISMA is outdated and the status quo is clearly not working."

In addition to the FISMA update, Senate lawmakers also are working on legislation that would require critical infrastructure operators to report incidents such as ransomware intrusions and ransom payments to the appropriate federal authority. The goal is to give authorities as well as private sector companies a clearer and more timely picture of ongoing threats.

“We need to get reports about all flavors of cyber incidents because it’s important to be able to render assistance and analyze and share information widely. Having that information in a timely way so we can share with critical infrastructure and state and local level governments, so we can collectively raise the baseline of the cyber ecosystem. It’s incredibly important to instantiate that in legislation,” Easterly said.

The Biden administration has put a strong emphasis on cybersecurity in general and ransomware specifically, pressuring Russian leaders to stop harboring cybercrime groups, indicting alleged ransomware actors, and sanctioning organizations it says are part of the ransomware payment ecosystem. Earlier this week, the Department of the Treasury designated the cryptocurrency exchange Suex, putting it off-limits for transactions by U.S. persons.

“SUEX has facilitated transactions involving illicit proceeds from at least eight ransomware variants. Analysis of known SUEX transactions shows that over 40% of SUEX’s known transaction history is associated with illicit actors,” the Treasury advisory says.

Disrupting the ransomware payment pipeline is a complex task, thanks to the fact that virtually all payments are made in cryptocurrency and the exchanges and processors that handle them are not based in the U.S. But federal officials said it’s possible.

“I do think it’s doable to disrupt the cryptocurrency payment system. We can essentially lock those down if we know that they’re engaged in illicit activities,” said Chris Inglis, National Cyber Director, and a former deputy director of the National Security Agency.

]]>
<![CDATA[U.S. Warns of Continued Threat from Conti Ransomware]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/u-s-warns-of-continued-threat-from-conti-ransomware https://duo.com/decipher/u-s-warns-of-continued-threat-from-conti-ransomware Thu, 23 Sep 2021 00:00:00 -0400

The Conti ransomware operators have been a menace for more than a year, with attacks against health care providers, 911 systems, and many other critical organizations being connected to Conti affiliates in the last few months. Although the group’s internal playbook leaked online a few weeks ago, Conti attacks have not slowed down, and the FBI, CISA, and NSA are warning enterprises that the threat from the group continues.

Conti is one of the many ransomware-as-a-service (RaaS) operations that have sprouted up in recent years, and its affiliates have shown a willingness to target virtually any type of organization. Earlier this year, a Conti affiliate compromised Ireland’s Health Service Executive, taking down much of the service’s infrastructure and forcing the cancellation of appointments and massive care delays. The Irish police later seized some Conti infrastructure, but like most RaaS groups, it has a distributed operation that was not completely disrupted by the action.

On Wednesday, the three top federal government agencies that handle cybersecurity issues published a warning about the continued threat from the Conti operation, saying that they have seen more than 400 Conti attacks recently. Conti affiliates use a variety of techniques for gaining initial access to target networks, including phishing campaigns, stolen credentials, and installation through other malware families. Once inside a network, the actors often use legitimate tools for lateral movement and network inventory.

“Conti actors are known to exploit legitimate remote monitoring and management software and remote desktop software as backdoors to maintain persistence on victim networks. The actors use tools already available on the victim network—and, as needed, add additional tools, such as Windows Sysinternals and Mimikatz—to obtain users’ hashes and clear-text credentials, which enable the actors to escalate privileges within a domain and perform other post-exploitation and lateral movement tasks. In some cases, the actors also use TrickBot malware to carry out post-exploitation tasks,” the advisory from CISA, NSA, and FBI says.

The Conti playbook that appeared online a few weeks ago includes quite a bit of detail about the affiliates’ responsibilities, tools to use, and how to find administrator access once they’re on a new network. A translation of the playbook from Russian to English performed by Cisco Talos researchers reveals that the group has a variety of tools and techniques at its disposal.

“Conti actors are known to exploit legitimate remote monitoring and management software and remote desktop software as backdoors."

“The adversaries also included instructions on CVE-2020-1472 Zerologon exploitation in Cobalt Strike. In a previous Ryuk ransomware engagement from Q2 2021, we observed the adversary access several additional resources within that environment and employ a privilege escalation exploit leveraging CVE-2020-1472 to impersonate a domain controller,” Talos researchers said.

“Talos first started observing Ryuk adversaries using the Zerologon privilege-escalation vulnerability in September 2020 and continued updating their attacks on the health care and public health sectors in October. Some researchers have described Conti as the successor to Ryuk.”

Conti affiliates also use publicly available legitimate tools in their operations, including Cobalt Strike.

“Conti actors often use the open-source Rclone command line program for data exfiltration. After the actors steal and encrypt the victim's sensitive data, they employ a double extortion technique in which they demand the victim pay a ransom for the release of the encrypted data and threaten the victim with public release of the data if the ransom is not paid,” the advisory says.

]]>
<![CDATA[VMware Fixes Critical Flaw in vCenter Server]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/vmware-fixes-critical-flaw-in-vcenter-server https://duo.com/decipher/vmware-fixes-critical-flaw-in-vcenter-server Wed, 22 Sep 2021 00:00:00 -0400

VMware has released fixes for several serious vulnerabilities in its vCenter Server, including a critical arbitrary file upload flaw that attackers can exploit remotely with little effort.

The bug (CVE-2021-22005) is present in versions 6.5, 6.7, and 7.0 of vCenter Server, and VMware is encouraging customers running affected versions to update as soon as they can.

“The ramifications of this vulnerability are serious and it is a matter of time – likely minutes after the disclosure – before working exploits are publicly available,” the advisory says.

“With the threat of ransomware looming nowadays the safest stance is to assume that an attacker may already have control of a desktop and a user account through the use of techniques like phishing or spearphishing, and act accordingly. This means the attacker may already be able to reach vCenter Server from inside a corporate firewall, and time is of the essence.”

In order to exploit this vulnerability, an attacker would only need the ability to reach a specific port on the affected server.

“A malicious actor with network access to port 443 on vCenter Server may exploit this issue to execute code on vCenter Server by uploading a specially crafted file,” the advisory says.

In addition to this vulnerability, VMware also released patches for more than a dozen other flaws, including a local privilege escalation in vCenter Server.

“A malicious actor with non-administrative user access on vCenter Server host may exploit this issue to escalate privileges to Administrator on the vSphere Client (HTML5) or vCenter Server vSphere Web Client (FLEX/Flash),” the VMware advisory says.

VMware also released fixes for several other privilege escalation, denial of service, and information disclosure bugs in vCenter Server and Cloud Foundation. Organizations running affected versions of the products should upgrade as soon as practical.

]]>
<![CDATA[Azure OMIGOD Flaw Under Attack]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/azure-omigod-flaw-under-attack https://duo.com/decipher/azure-omigod-flaw-under-attack Mon, 20 Sep 2021 00:00:00 -0400

Several separate attack groups are attempting to exploit the remote-code execution vulnerability in the Open Management Infrastructure (OMI) Framework agent for Azure that was disclosed last week, including a group that is installing the Mirai botnet malware on compromised hosts.

The OMI extension is installed in the background with many Azure services, including Open Management Suite, Azure Insights, and Azure Automation, and is used for configuration management on Linux and Unix systems. Researchers at Wiz discovered the vulnerability (CVE-2021-38647), along with three local privilege escalation bugs, and disclosed them to Microsoft, which issued updates for them on Sept. 14. But attackers took notice quickly, and within days, a number of proof-of-concept exploits were available and actors were looking for vulnerable hosts.

“To date we have seen several active exploitation attempts ranging from basic host enumeration (running uname, id, ps commands) to attempts to install a crypto currency miner or file share. We have also seen others in the community report similar behavior to include installs of the Mirai botnet. While many of the attackers are looking for port 5986, we are also seeing attacks on port 1270,” Russell McDonald of the Microsoft Threat Intelligence Center said.

“Due to the number of easily adaptable proof of concept exploits available and the volume of reconnaissance-type attacks, we are anticipating an increase in the number of effects-type attacks (coin miners, bot installation, etc.). In a nutshell, anyone with access to an endpoint running a vulnerable version (less than 1.6.8.1) of the OMI agent can execute arbitrary commands over an HTTP request without an authorization header.”

The OMI flaw is rated critical, and it’s all the more serious due to the fact that OMI is installed automatically, and mostly silently, with so many Azure VMs. Exploitation of the bug is not complicated, either.

“Thanks to the combination of a simple conditional statement coding mistake and an uninitialized authentication struct, any request without an Authorization header has its privileges default to uid=0, gid=0, which is root. This vulnerability allows for remote takeover when OMI exposes the HTTPS management port externally (5986/5985/1270). This is in fact the default configuration when installed standalone and in Azure Configuration Management or System Center Operations Manager (SCOM),” the Wiz Research Team wrote in its explanation of the vulnerability.
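
Because the dangerous configuration is simply a management port being reachable without authentication, a first-pass inventory can be as blunt as a TCP connect check against the three ports named above. Here is a short sketch, assuming you have a list of VM addresses to test; it confirms exposure only and does not probe the missing-Authorization-header behavior itself.

```python
import socket

# Management ports named in the Wiz and Microsoft write-ups. External exposure
# of any of these on a VM running an OMI agent older than 1.6.8.1 is the
# high-risk case described above.
OMI_PORTS = (5985, 5986, 1270)

def exposed_omi_ports(host: str, timeout: float = 3.0) -> list[int]:
    """Return the subset of OMI management ports accepting TCP connections."""
    open_ports = []
    for port in OMI_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports

if __name__ == "__main__":
    for host in ["10.0.0.4", "10.0.0.5"]:  # hypothetical VM addresses
        ports = exposed_omi_ports(host)
        if ports:
            print(f"{host}: OMI management ports open: {ports}")
```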

Two days after Microsoft released the advisory for the vulnerability, researchers saw the operators of the Mirai botnet attempting to exploit the bug, although those attempts failed at the time because the exploit had been implemented incorrectly. That has since changed.

“Oh Mirai fixed their binary, it now supports proper OMIGOD exploitation. Given Mirai can enter networks and spread laterally via multiple vulns, this might be problematic,” researcher Kevin Beaumont said on Twitter Friday.

GreyNoise, which monitors scanning and attack traffic, has identified numerous malicious hosts trying to exploit this vulnerability to install Mirai, as well.

]]>
<![CDATA[New Turla Backdoor Identified]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/new-turla-backdoor-identified https://duo.com/decipher/new-turla-backdoor-identified Mon, 20 Sep 2021 00:00:00 -0400

The Turla cyberespionage group, which has been in operation for the better part of a quarter century and is connected to the infamous Moonlight Maze attack on the Pentagon and other agencies, recently has been deploying a small, previously undocumented backdoor against targets in the United States, Germany, and Afghanistan.

The backdoor, known as TinyTurla, is quite simple, and researchers believe it likely serves as a backup persistence mechanism for the group to maintain access to compromised machines. Researchers with Cisco Talos discovered the backdoor and believe it has been in use since at least last year. Most recently, the backdoor has been deployed against targets in Afghanistan during the turmoil surrounding the shift in power after the U.S. military withdrawal. Talos discovered that the backdoor was using some infrastructure that was known to have been used in other Turla operations in the past.

“Based on forensic evidence and the fact that it was using formerly attributed infrastructure from the Penguin Turla malware, Talos assesses with moderate confidence that this was used to target the previous Afghan government,” Talos researchers wrote in a new analysis of the backdoor.

Turla, which is also known by a long list of other names, including Snake and Uroburos, is one of the more venerable and prolific known APT groups and is connected to many high-level operations during the last two decades. The most well-known of those intrusions is the Moonlight Maze operation, which involved compromises of NASA, the Pentagon, the Department of Energy, and other agencies in the late 1990s. That operation involved the theft of military data, maps, and technical documents, and it kicked off a massive government investigation that lasted several years. Researchers have not directly attributed Moonlight Maze to Turla, but the connective tissue is strong.

It wasn’t until much later, in 2014, that Turla was properly identified by researchers and its more recent operations and tools were exposed. The group has a broad array of attack tools at its disposal and is known to use zero day exploits in some of its operations. Turla is a Russian group and it often operates in alignment with the Russian government’s political interests and objectives. The group is highly focused on espionage activity and has significant financial and technical resources at its disposal.

"They used the same infrastructure as they used for other attacks that have been clearly attributed to their Penguin Turla Infrastructure."

Many of Turla’s malicious tools are known to researchers, but the Windows backdoor that Talos discovered had not been documented previously. It masquerades as a Windows service and would not necessarily be simple for defenders to identify as malicious.

“The adversaries installed the backdoor as a service on the infected machine. They attempted to operate under the radar by naming the service ‘Windows Time Service,’ like the existing Windows service. The backdoor can upload and execute files or exfiltrate files from the infected system. In our review of this malware, the backdoor contacted the command and control (C2) server via an HTTPS encrypted channel every five seconds to check if there were new commands from the operator,” Talos said.

Although Turla is near the top of the heap of APT groups, it comprises humans, and humans get lazy and make mistakes sometimes. And those mistakes can help researchers track their activities and identify their operations, as Talos did in this case.

“During their campaigns, they are often using and re-using compromised servers for their operations, which they access via SSH, often protected by TOR. One public reason why we attributed this backdoor to Turla is the fact that they used the same infrastructure as they used for other attacks that have been clearly attributed to their Penguin Turla Infrastructure,” Talos said.

]]>
<![CDATA[MSHTML Zero Day Exploits Used Shared Infrastructure With Ransomware Group]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/mshtml-zero-day-exploits-used-shared-infrastructure-with-ransomware-group https://duo.com/decipher/mshtml-zero-day-exploits-used-shared-infrastructure-with-ransomware-group Thu, 16 Sep 2021 00:00:00 -0400

The MSHTML zero day in Windows that Microsoft patched this week has been in use by attackers since at least late August, and research from Microsoft and RiskIQ shows that some of the infrastructure used in the campaigns exploiting the bug has also been used by a highly active ransomware group.

The first indications of the new Windows vulnerability (CVE-2021-40444) surfaced on Aug. 21 when a Mandiant researcher posted some information about a malicious Office document. Microsoft researchers looked at the document and found some indicators that it was abusing a previously unknown vulnerability. A couple weeks later, on Sept. 7, Microsoft issued an advisory about the flaw and warned customers that attackers were already exploiting it. Within days, exploits for the flaw were circulating publicly and in private forums.

“As a routine in these instances, Microsoft was working to ensure that the detections described in the advisory would be in place and a patch would be available before public disclosure. During the same time, a third-party researcher reported a sample to Microsoft from the same campaign originally shared by Mandiant. This sample was publicly disclosed on September 8. We observed a rise in exploitation attempts within 24 hours,” Microsoft’s Threat Intelligence Center said in a new analysis of the exploitation activity against the vulnerability.

The MSTIC researchers found that some of the infrastructure used in the initial attacks shared some characteristics and overlapped with infrastructure used in separate attacks that delivered Trickbot and BazaLoader malware. Those attacks are associated with a group that Mandiant calls UNC1878, which is known to use several different ransomware strains, as well. Researchers at RiskIQ, which Microsoft acquired recently, also found indications that the same infrastructure was in use for the ransomware campaigns and exploitation of CVE-2021-40444.

“RiskIQ’s Team Atlas assesses with high confidence that the operators behind the deployment of the zero-day exploit and Cobalt Strike BEACON implants are using infrastructure that shares historical connections to a large, loosely-related criminal enterprise given the names WIZARD SPIDER (CrowdStrike), UNC1878 (FireEye), and RYUK (Public). These groups are known to use the Conti and Ryuk malware families in targeted, so-called Big-Game Hunting ransomware campaigns aimed at large enterprises,” the company said.

“The association of a zero-day exploit with a ransomware group, however remote, is troubling."

It’s quite unusual for a ransomware group to use a zero day in its operations, as most of those groups rely on other, much simpler methods for initial access to networks. Some groups will buy initial access from other attackers who have previously compromised an organization, while others will employ simple phishing attacks that lead to credential theft or direct deployment of the ransomware.

“The association of a zero-day exploit with a ransomware group, however remote, is troubling. It suggests either that turnkey tools like zero-day exploits have found their way into the already robust ransomware-as-a-service (RaaS) ecosystem or that the more operationally sophisticated groups engaged in traditional, government-backed espionage are using criminally controlled infrastructure to misdirect and impede attribution,” RiskIQ’s researchers said.

“Despite the historical connections, we cannot say with confidence that the threat actor behind the zero-day campaign is part of WIZARD SPIDER or its affiliates, or is even a criminal actor at all, though it is possible. If the threat actors were part of these groups, it means they almost surely purchased the zero-day exploit from a third party because they have not previously shown the ability to develop exploit chains of this complexity.”

There are now several different attack groups using exploits for the vulnerability in active attacks, so organizations should deploy the patch Microsoft released this week as quickly as possible.

]]>
<![CDATA[Re-Deciphering Hackers]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/re-deciphering-hackers https://duo.com/decipher/re-deciphering-hackers Wed, 15 Sep 2021 00:00:00 -0400

]]>
<![CDATA[Apple Patches Two iOS Bugs Exploited in the Wild]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/apple-patches-two-ios-bugs-exploited-in-the-wild https://duo.com/decipher/apple-patches-two-ios-bugs-exploited-in-the-wild Tue, 14 Sep 2021 00:00:00 -0400

Apple has released a security-only update for iOS and macOS that fixes two vulnerabilities, both of which have been exploited by attackers.

One of the flaws was discovered by researchers at Citizen Lab recently while they were examining the iPhone of a Saudi Arabian activist. The researchers found that the phone had been compromised using an exploit for a previously unknown bug in the CoreGraphics component of iOS, and further found that the exploit did not require any user interaction. That vulnerability (CVE-2021-30860) is an integer overflow, and Citizen Lab’s researchers identified some overlap between artifacts of the exploit used for that bug and previous operations associated with the installation of NSO Group’s Pegasus spyware tool.

The exploit has been in use since at least February, and Citizen Lab has termed it FORCEDENTRY. As part of the forensic examination of the compromised phone, the researchers found 27 identical files, all with the .gif extension.

“Because the format of the files matched two types of crashes we had observed on another phone when it was hacked with Pegasus, we suspected that the “.gif” files might contain parts of what we are calling the FORCEDENTRY exploit chain,” the Citizen Lab report on the bug says.

“The spyware installed by the FORCEDENTRY exploit exhibited a forensic artifact that we call CASCADEFAIL, which is a bug whereby evidence is incompletely deleted from the phone’s DataUsage.sqlite file. In CASCADEFAIL, an entry from the file’s ZPROCESS table is deleted, but not entries in the ZLIVEUSAGE table that refer to the deleted ZPROCESS entry. We have only ever seen this type of incomplete deletion associated with NSO Group’s Pegasus spyware, and we believe that the bug is distinctive enough to point back to NSO.”
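
The CASCADEFAIL artifact described above can be checked for directly with a SQL query against a copy of DataUsage.sqlite: look for ZLIVEUSAGE rows whose parent ZPROCESS row no longer exists. A sketch in Python follows; the table and column names reflect the usual Core Data layout of that database as documented by public mobile-forensics tooling, so verify them against your own copy, and treat any hits as a lead rather than proof of compromise.

```python
import sqlite3

# Column names (Z_PK, ZHASPROCESS) are assumptions based on the typical Core
# Data schema of DataUsage.sqlite; confirm them with ".schema" before relying
# on the result.
QUERY = """
SELECT ZLIVEUSAGE.Z_PK, ZLIVEUSAGE.ZHASPROCESS
FROM ZLIVEUSAGE
LEFT JOIN ZPROCESS ON ZLIVEUSAGE.ZHASPROCESS = ZPROCESS.Z_PK
WHERE ZPROCESS.Z_PK IS NULL;
"""

def orphaned_usage_rows(db_path: str) -> list[tuple]:
    """Return ZLIVEUSAGE rows whose parent ZPROCESS entry has been deleted.

    Per Citizen Lab's CASCADEFAIL description, such orphans can indicate an
    incomplete cleanup attempt on the device.
    """
    with sqlite3.connect(db_path) as conn:
        return conn.execute(QUERY).fetchall()

print(orphaned_usage_rows("DataUsage.sqlite"))  # path to an extracted copy
```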

Pegasus is a tool sold to government and law enforcement agencies to enable remote surveillance of electronic devices, specifically mobile devices. Researchers have identified several zero day exploits, including other zero-click exploits, that customers of NSO Group have used to install Pegasus, and the tool has been found on the phones of activists and dissidents in many countries.

Along with the vulnerability identified by Citizen Lab, Apple also patched a separate use-after-free bug in WebKit that also has been exploited in the wild.

“Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited,” the Apple advisory says.

Both vulnerabilities are fixed in iOS 14.8 and macOS Big Sur 11.6.

]]>
<![CDATA[Decipher Podcast: Carolina Terrazas]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/decipher-podcast-carolina-terrazas https://duo.com/decipher/decipher-podcast-carolina-terrazas Mon, 13 Sep 2021 00:00:00 -0400

]]>
<![CDATA[REvil Ransomware Group Reemerges]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/revil-ransomware-group-reemerges https://duo.com/decipher/revil-ransomware-group-reemerges Mon, 13 Sep 2021 00:00:00 -0400

The notorious REvil ransomware group, which went dark in July, has reemerged on underground forums and is attempting to reestablish ties with its former affiliates to begin operations again.

REvil is a ransomware-as-a-service group that had been operating for several years before drawing a large amount of unwanted attention this summer with the intrusion at software maker Kaseya. That attack resulted in nearly 1,500 companies that use Kaseya’s VSA remote administration service being infected with ransomware and Kaseya taking the service offline for several days. The incident caught the attention of not just law enforcement, but also the Biden administration. The REvil group is believed to operate from Russia, and President Biden raised the issue of ransomware and cybercrime groups in Russia with Russian President Vladimir Putin in talks after the Kaseya incident.

Following the Kaseya attack, REvil’s operators dropped off the underground forums where they communicated with affiliates and the group’s infrastructure was taken offline. But last week, researchers discovered posts from apparent REvil operators on Exploit, a well-known forum, explaining that the group was back.

“For all intents and purposes, it appears that REvil is fully operational after its hiatus. Evidence also points to the ransomware group making efforts to mend fences with former affiliates who have expressed unhappiness with the group’s disappearance,” researchers at Flashpoint wrote in an analysis of the posts.

“Two days prior, on September 7, the REvil leaks blog known as Happy Blog, went back online after a two-month hiatus. REvil is also allegedly back on Exploit under a new alias, ‘REvil’.”

The Kaseya incident brought quite a bit of attention to the ransomware problem in general and REvil’s operations specifically. Law enforcement agencies in the United States and Europe have been focusing intently on disrupting ransomware groups, their infrastructure, and their payment ecosystems for some time. But the Kaseya intrusion, coupled with the ransomware attack on Colonial Pipeline in May, led to a new level of interest from the Biden administration, which has formed a ransomware task force and created a new Joint Cyber Defense Collaborative (JCDC) to share resources with and cooperate with private sector companies to combat ransomware.

But the two main issues that make ransomware groups such as REvil successful still remain: the payment ecosystem and the political cover they receive in countries such as Russia and North Korea. Addressing those issues will take time and solutions in the technical and policy arenas, both of which are difficult and complicated. How that plays out remains to be seen, but for the time being the reemergence of REvil brings another player onto an already crowded board.

]]>
<![CDATA[Exploits Circulating for Windows MSHTML Zero Day]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/exploits-circulating-for-windows-mshtml-zero-day https://duo.com/decipher/exploits-circulating-for-windows-mshtml-zero-day Fri, 10 Sep 2021 00:00:00 -0400

With no patch yet available for the recently disclosed zero day in the Windows MSHTML engine, attackers are continuing to take advantage of the bug in targeted attacks, while researchers have discovered that the recommended mitigation doesn’t protect against all types of attacks.

Microsoft released an advisory detailing the vulnerability (CVE-2021-40444) on Tuesday and warned that exploitation had already been detected. The vulnerability lies in the MSHTML engine and it affects all modern versions of Windows. The most likely exploitation vector is with a malicious Office document delivered via email, but there are other scenarios, as well.

“An attacker could craft a malicious ActiveX control to be used by a Microsoft Office document that hosts the browser rendering engine. The attacker would then have to convince the user to open the malicious document. Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights,” the Microsoft advisory says.

Researcher Rich Warren of NCC Group also developed a technique for exploiting the bug using a rich-text format file in Windows Explorer.

“This indicates it can be exploited even without opening the file and this invalidates Microsoft’s workaround mitigation,” said John Hammond of Huntress in an analysis of the activity.

“For Office files, no traditional VBA macros are needed for this attack. Any URL beginning with mshtml:http will download a file passed to the MSHTML parser engine, and potentially any way an Office document can call out to a URL can be used to exploit CVE-2021-40444.”
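
That detail suggests a simple triage check for suspect Office documents: a .docx file is a ZIP archive, and the external URL an exploit document fetches is recorded as a relationship target inside it. The sketch below assumes the standard OOXML layout and matches both the “mhtml:” and “mshtml:” spellings to be safe; the sample file name in the usage line is hypothetical.

```python
import zipfile
import xml.etree.ElementTree as ET

# Standard OOXML relationship part for the main document body. An external
# target beginning with an mhtml-style URL here is a strong CVE-2021-40444
# indicator, per the Huntress analysis quoted above.
RELS_PARTS = ("word/_rels/document.xml.rels",)
SUSPECT_PREFIXES = ("mhtml:", "mshtml:")

def suspicious_targets(docx_path: str) -> list[str]:
    """Return external relationship targets in a .docx using mhtml-style URLs."""
    hits = []
    with zipfile.ZipFile(docx_path) as doc:
        for part in RELS_PARTS:
            if part not in doc.namelist():
                continue
            root = ET.fromstring(doc.read(part))
            for rel in root:  # each child is a Relationship element
                target = rel.get("Target", "")
                if target.lower().startswith(SUSPECT_PREFIXES):
                    hits.append(target)
    return hits

print(suspicious_targets("invoice.docx"))  # hypothetical sample document
```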

And, though the original exploit activity can be traced back as far as August, the exploits had remained private until Friday, when some proof-of-concept exploits began circulating on forums. Some offensive security researchers have begun sharing their own exploits, as well. Microsoft has not said when it plans to release a patch for the vulnerability, but the company’s next scheduled Patch Tuesday release is Sept. 14.

Microsoft’s main workaround suggestion has been to disable all ActiveX controls in Internet Explorer, but researchers have demonstrated that exploitation does not necessarily rely on ActiveX.

]]>
<![CDATA[Attackers Exploiting Critical Flaw in Zoho Password Management Tool]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/attackers-exploiting-critical-flaw-in-zoho-password-management-tool https://duo.com/decipher/attackers-exploiting-critical-flaw-in-zoho-password-management-tool Thu, 09 Sep 2021 00:00:00 -0400

ManageEngine, the maker of a number of products for managing Active Directory deployments, said attackers are actively exploiting an authentication bypass flaw in its ADSelfService Plus tool, which is used for password resets and identity management.

The vulnerability (CVE-2021-40539) affects ADSelfService Plus up to build 6113, and the company said it has seen evidence that attackers are taking advantage of it already. Exploiting the vulnerability does not take much effort, and ManageEngine, a division of Zoho, is encouraging customers to update to the latest build, 6114, to protect themselves.

“This vulnerability allows an attacker to gain unauthorized access to the product through REST API endpoints by sending a specially crafted request. This would allow the attacker to carry out subsequent attacks resulting in RCE,” the advisory says.

“This is a critical issue. We are noticing indications of this vulnerability being exploited.”

ADSelfService Plus is a management application that offers a variety of identity management and password management capabilities, including self-service password reset, SSO, policy enforcement for multi-factor authentication, and other features. The app is used by quite a number of enterprises, and ManageEngine lists IBM, eBay, and Northrop Grumman among the customers on its site.

Organizations that suspect their installations of ADSelfService Plus may have been affected can look for a couple of specific entries in the app’s logs: /RestAPI/LogonCustomization or /RestAPI/Connection. The presence of either of those entries indicates a compromise. Likewise, the presence of service.cer in the \ManageEngine\ADSelfService Plus\bin folder or ReportGenerate.jsp in the \ManageEngine\ADSelfService Plus\help\admin-guide\Reports folder is evidence of compromise.
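
Those indicators lend themselves to a quick scripted sweep. The sketch below checks a log directory for the two REST API markers and looks for the two dropped files; the drive letter and log-directory path are assumptions, so adjust them to the actual ADSelfService Plus install root.

```python
import os

# Indicators taken directly from the ManageEngine advisory described above.
LOG_MARKERS = ("/RestAPI/LogonCustomization", "/RestAPI/Connection")
DROPPED_FILES = (
    r"C:\ManageEngine\ADSelfService Plus\bin\service.cer",
    r"C:\ManageEngine\ADSelfService Plus\help\admin-guide\Reports\ReportGenerate.jsp",
)  # drive letter is an assumption; adjust to your install root

def check_logs(log_dir: str) -> list[str]:
    """Return log lines containing either of the advisory's REST API markers."""
    hits = []
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        if not os.path.isfile(path):
            continue
        with open(path, errors="replace") as fh:
            for line in fh:
                if any(marker in line for marker in LOG_MARKERS):
                    hits.append(f"{name}: {line.strip()}")
    return hits

def check_files() -> list[str]:
    """Return any of the advisory's dropped-file indicators present on disk."""
    return [p for p in DROPPED_FILES if os.path.exists(p)]

print(check_logs(r"C:\ManageEngine\ADSelfService Plus\logs"))  # path assumed
print(check_files())
```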

]]>
<![CDATA[Decipher Podcast: Amélie Koran]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/decipher-podcast-amelie-koran https://duo.com/decipher/decipher-podcast-amelie-koran Tue, 07 Sep 2021 00:00:00 -0400

]]>
<![CDATA[Microsoft Warns of Attacks on Windows MSHTML Zero Day]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/microsoft-warns-of-attacks-on-windows-mshtml-zero-day https://duo.com/decipher/microsoft-warns-of-attacks-on-windows-mshtml-zero-day Tue, 07 Sep 2021 00:00:00 -0400

Microsoft is warning customers about a newly identified vulnerability in the MSHTML component in Windows that attackers are actively exploiting in targeted attacks.

The vulnerability (CVE-2021-40444) affects most of the current versions of Windows and Windows Server and it does not require any privileges for exploitation. Microsoft has published some workarounds and mitigations for the bug, but there is no patch available yet.

“Microsoft is investigating reports of a remote code execution vulnerability in MSHTML that affects Microsoft Windows. Microsoft is aware of targeted attacks that attempt to exploit this vulnerability by using specially-crafted Microsoft Office documents,” the advisory says.

“An attacker could craft a malicious ActiveX control to be used by a Microsoft Office document that hosts the browser rendering engine. The attacker would then have to convince the user to open the malicious document. Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights.”

This flaw looks like a prime target for spear phishing attacks, as an attacker could simply attach the malicious Office document to a crafted email and just wait for the victim to open it. Microsoft recommends a few workarounds to help defend against attacks on this vulnerability, including disabling ActiveX controls.

“Disabling the installation of all ActiveX controls in Internet Explorer mitigates this attack. This can be accomplished for all sites by updating the registry. Previously-installed ActiveX controls will continue to run, but do not expose this vulnerability,” the advisory says.
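
For organizations that manage this setting programmatically rather than through a .reg file, the same change can be scripted. The sketch below reflects our reading of Microsoft’s published workaround, setting the ActiveX download URL actions to “disable” across the four standard Internet Explorer zones; verify the key path and value names against the advisory before deploying, and note that researchers later demonstrated attack paths that do not rely on ActiveX at all.

```python
import winreg

# Zone numbers 0-3 and value names 1001/1004 (download signed/unsigned ActiveX
# controls) reflect our reading of Microsoft's published workaround for
# CVE-2021-40444; confirm against the advisory before deploying.
BASE = r"SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\Zones"
DISABLE = 3  # 3 = disable the URL action

def disable_activex_install() -> None:
    """Apply the advisory's ActiveX-install policy to all four IE zones."""
    for zone in range(4):
        key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, rf"{BASE}\{zone}")
        with key:
            winreg.SetValueEx(key, "1001", 0, winreg.REG_DWORD, DISABLE)
            winreg.SetValueEx(key, "1004", 0, winreg.REG_DWORD, DISABLE)

if __name__ == "__main__":
    disable_activex_install()  # requires admin rights; reboot to apply
```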

The good news is that there are some default protections in Windows that mitigate this vulnerability. The default behavior for Microsoft Office is to open documents from the Internet in Protected View, which prevents the currently known attack from succeeding.

Microsoft’s next scheduled patch release is Sept. 14, but the company could release an out-of-band patch before then.

]]>
<![CDATA[Slow Uptake on Critical Confluence Update]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/slow-uptake-on-critical-confluence-update https://duo.com/decipher/slow-uptake-on-critical-confluence-update Fri, 03 Sep 2021 00:00:00 -0400

Last week, Atlassian released details about a critical vulnerability in its popular Confluence enterprise wiki service, urging customers to upgrade as soon as possible because the bug could be used for arbitrary code execution. However, it doesn’t appear that many organizations have taken the warning seriously.

The vulnerability (CVE-2021-26084) affects all versions of Confluence Server and Data Center prior to 6.13.23, 7.11.6, 7.12.5, 7.13.0, or 7.4.11, and it’s an issue in the way the Object-Graph Navigation Language interprets some HTML fields. A security researcher named Jacob Benny discovered and disclosed the flaw to Atlassian, which has released updated versions for all of the affected products.
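
Administrators who just want to know whether a given install falls in the affected range can compare its version against the fixed releases listed above. A small sketch follows; the branch mapping is inferred from those version numbers, and it errs on the side of flagging unlisted branches older than 7.13.0.

```python
# Fixed releases listed in the Atlassian advisory, keyed by minor branch.
# The branch mapping is an inference from those version numbers: a server is
# treated as vulnerable if it is older than the fix for its own branch, or if
# it runs an unlisted branch below 7.13.0.
FIXED = {(6, 13): (6, 13, 23), (7, 4): (7, 4, 11),
         (7, 11): (7, 11, 6), (7, 12): (7, 12, 5), (7, 13): (7, 13, 0)}

def parse(version: str) -> tuple[int, ...]:
    """Split a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(version: str) -> bool:
    v = parse(version)
    branch = v[:2]
    if branch in FIXED:
        return v < FIXED[branch]
    return v < (7, 13, 0)  # conservative default for unlisted branches

assert is_vulnerable("7.12.4")
assert not is_vulnerable("7.13.1")
```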

But new data collected by Censys shows that only a small fraction of the vulnerable instances have been updated since Atlassian published its advisory on Aug. 25. A few days before the advisory came out, Censys found 14,637 vulnerable instances online, and by Wednesday that number had dropped only to 12,876. That’s not much of a change in a week, particularly given the critical nature of the flaw.

“There is no way to put this lightly: this is bad."

“An attacker can leverage this vulnerability to execute any command with the same permissions as the user-id running the service. An attacker can then use this access to gain elevated administrative permissions if the host has unpatched local vulnerabilities,” Mark Ellzey, a senior security researcher at Censys, wrote in an analysis of the data the company collected on the bug.

“There is no way to put this lightly: this is bad. Initially, Atlassian stated this was only exploitable if a user had a valid account on the system; this was found to be incorrect and the advisory was updated today to reflect the new information. It’s only a matter of time before we start seeing active exploitation in the wild as there have already been working exploits found scattered about.”

Not only have the details in the Atlassian advisory been public for more than a week, but so have the details from the researcher himself, who has published walkthroughs of the bug online. Enterprises running on-premises Confluence instances should move to the most recent release as soon as is practical.

]]>
<![CDATA['Drive It Like You Stole It': When Bug Bounties Went Boom, Part Three]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/you-got-to-drive-it-like-you-stole-it-when-bug-bounties-went-boom-part-three https://duo.com/decipher/you-got-to-drive-it-like-you-stole-it-when-bug-bounties-went-boom-part-three Wed, 01 Sep 2021 05:00:00 -0400

After more than a decade of evolution and innovation, bug bounty programs had proven to be invaluable tools that organizations in many industries could use to improve their security with the help of outsiders. During Barack Obama's second term, some administration officials began looking at bounties as a potential way to jump-start the effort to upgrade the federal government's security programs. The idea was a radical one, given the government's institutional resistance to change, not to mention the inherent risks of allowing outside hackers to look for bugs in target sites. So they decided to start slowly. By hacking the Pentagon.

Note: All job titles and positions reflect the person's role at the time of the events.

Read part one and part two.

Alex Romero (CISO of the Defense Media Activity): I tried to recreate the potential of the crowd to come after hard assets, because I tried literally every tool that I could find, every open source tool that I could get my hands on to hack myself. So I actually tried, I think, seven months before we officially did the whole Hack the Pentagon thing. The Defense Digital Service wasn't really formed yet, but I reached out to my lawyers and asked, "Can we use this bug bounty thing? It seems like a good idea." And they're like, "Oh, hell no. Horrible idea." Like, "You're not going to invite hackers to come test our sites." So, how can you get after that problem? Well, I mean, thinking like a hacker. When I was in the Marines, we fought and trained like our adversaries did. But we weren't doing the same thing for our networks. So, that always stuck with me. We could do better. We have to do better.

So I met Chris Lynch, met Reina Staley, Corey Harrison. Those were the tri-founders of DDS. Then from that point on, once we realized, "Okay, this is a good idea. How do we get everybody on board? Who are all the stakeholders?" That was really one of the hardest parts. Yes, I might own the systems, but the network on DoD is owned by many. And what other issues could there be? We had done all sorts of pen tests and red teams, because we didn't want to get embarrassed by anything that the researchers found. We thought we were pretty good, but it turns out we weren't.

Katie Moussouris (Chief Policy Officer, HackerOne): I was giving a guest lecture at a joint symposium that was Harvard Kennedy School and MIT Sloan School. So, first of all, it was a career highlight for me because I stayed in the Charles Hotel nearby. And that was actually a hotel that I used to use to clean myself up as a homeless teenager. At the lecture, I was spotted by Michael Sulmeyer (director for plans and operations for cyber policy in the Defense Department) and he is now serving in a cyber position somewhere in the White House. And he was sitting in on that lecture and he said, "Have you ever been to the Pentagon?" And I was like, "No," and he said, "Well, would you come brief the Pentagon if I invited you?" and I said, "Of course I would."

And I briefed an audience of various people that Michael pulled together and among them was Lisa Wiswell and she ended up taking over his position when he moved into a different role. So I showed Lisa and Charley Snyder, who also was in a policy role in the Pentagon, around ShmooCon a little bit. And then we talked on and off, beginning at that point for years. So it was conversations over several years where anytime I was in town in DC, I would stop over at the Pentagon for a visit and talk to them more, answer their questions about scope, scale, and preparation. Once she called and said, "Defense Digital Service launched, we want to pursue a bug bounty as one of the first big major public things that we do." I was like, "All right, well, let's just make it so that you don't wreck yourself." So what Lisa's ideas were, and I'm sure that RoRo had some of these as well, but Lisa was the one who basically rallied all of the different branches of our military that have offense capabilities. So she basically was like, "You know what? Katie says that we should test our stuff first. Why don't we do it with all the various cyber commands across the military?" And so there was a lot of bureaucracy hacking that she did.

Alex Romero: So fast forwarding past that moment in time to when we actually were told to make it happen, because now these folks were essentially operating on behalf of the Secretary of Defense, and that comes with all sorts of additional authorities. If the SecDef says to do it, then you get it done. Yeah, so the conversations are difficult, and actually that's where Katie was in a few of those conversations. We actually used some of her reference videos when we were trying to educate people on what a bug bounty was. And then she shows up to one of the conversations. I don't remember what color her hair was at the time, but definitely not fitting in either. She came ready, I'll put it that way, to have that discussion. But I think a lot of the work had already been done by the team to prep the battlefield, if you will.

I would put a lot of this and a lot of the success on Lisa Wiswell, and Charley. They did the bureaucracy hacking that allowed us to get to the point where we could actually run the challenge. Once we actually kicked off the challenge, the ball was in my court in terms of making sure it was a success. Because the untold story was, a lot of my leadership was not happy with this. They basically were like, "Well, participate, but it doesn't necessarily need to be a success." I was not of the same opinion. So I reached out to Chris Lynch at the time and I was like, "Well, I personally think this is a valuable tool, and I would like it to be a success." So there were various folks, not to put anybody down, but they were afraid of what could happen if we just invite researchers to come hack on our stuff, and they can pivot and find other ways in. But that's exactly what you want them to do. So, it's a completely different way of thinking.

"Hey, we're going to hack your shit. For real, we are. Here's the memo that says so."

Lisa Wiswell Coe (program manager, Hack the Pentagon): Really, if the first people you're talking to aren't the lawyers, then you've got a problem because at the end of the day, all this is, is a legal mechanism where a legal mechanism doesn't exist at present. It's essentially hacking a law or a suite of laws in order to find a loophole to allow people to do something, to give them authorization to do something that is otherwise a felony. I think in the first several sets of not just the Vulnerability Disclosure Policy discussions that were happening in the public space, but also specific to the assets that were selected for some of the initial bug bounties, you can see that it was intentional to try to make sure that the things that citizens rely on, not just me as a policy person in DoD, but the things that the world relies upon are the things that we're taking very seriously. And yet, the implication might be that they're not secure now. But, the ground truth is we're not, and it's no longer something we're going to disguise from you. We're not going to keep our heads in the sand any longer.

Alex Rice: That initial project was pretty successful for where they were at. Not just in terms of the assessment itself, but in convincing folks that the department's mission could benefit from hackers on the outside contributing to it in a meaningful way. One other note that's worth touching on with this is that the team did not have a vulnerability disclosure program at the time. And the momentum internally to establish a VDP wasn't there. So as part of our proposal to them to run their proof of concept, we added the ability for them to host a VDP program at the same time, to make it as easy as possible for them to have a VDP program set up to handle that ongoing relationship. And what if somebody finds something outside of the challenge? What do we do with it? We could run this narrowly scoped bug bounty proof of concept, but what if somebody finds something that's in a different system? What do we do with it? So we were kind of 11th-hour successful at making the case for a VDP program to get spun up as an add-on to the Hack the Pentagon program. So it was originally just a bug bounty proof of concept, and it ended up being a bug bounty proof of concept that achieved its goals, but it also was the inception point for the vulnerability disclosure program long-term.

Katie Moussouris: I think one of the trickiest places to get right in scoping is saying, tell us what you think the impact would be of a vulnerability. And this is where disagreements can come about with well-meaning researchers who were reading the rules and saying, okay, well with this credential that I managed to get here, or this token I managed to steal, I don't know what the impact would be unless I try to use it and then potentially pivot through their environment with it, which is usually not allowed, right? But a very technical researcher who is sort of thinking of themselves almost as a red teamer might take it too far. And I remember there were a few cases that required a lot of internal deescalation and some gentle explaining to the researcher by me and others who can speak hacker, of being like, "Hey friend, listen, you're totally right. Yep. You can definitely use that to pivot on through, but please do not, and stop it now, and no, really stop, because it's not like you're going to get more money out of it; really we're just saying stop." And luckily nothing bad happened and no researchers got sued or arrested or anything for hacking the Pentagon too much.

Lisa Wiswell Coe: I spoke enough of the language. I had been around the hacker community for probably 10 years prior, having spent a lot of years at DARPA where we had hackers working on particular programs that were like that, but for other objectives. And so, we were out at all of the hacker conferences at the beginning part of the year to try to help them understand that we were really serious about that. It wasn't just some sort of nebulous thing that people that didn't understand them had come up with. And at some point, we decided that it was probably a better place for me if I was detailed to Defense Digital Service because they had the unique authorities to be able to figure out how the hell do we pay for a bug bounty. Who can come up with a quick contract vehicle for them? And some of how we did that, I just drafted a memo for SecDef to sign that said, "We're going to do this and you're supposed to lead, and you got an organization act. Work with her to be able to achieve that, and achieve it successfully." It is really necessary if you're going to show up at the Defense Media Activity and say, "Hey, we're going to hack your shit. For real, we are. Here's the memo that says so."

Alex Romero: This led to conversations around, "Well, we really, really, really need to have a Vulnerability Disclosure Policy for the DoD." That was a huge, gaping hole in our defenses in a sense, because whereas in the physical world, especially after 9/11, we had the See Something, Say Something motto and we invited people to tell us about our faults, the same didn't apply as soon as you were talking about bits and bytes. We, in fact, would invite people to not look at our stuff and have all these very scary banners. "If you don't belong here, go away." Well, that's not how bots on the net think, or that's not how researchers or actual adversaries think. If it's possible, it's going to happen. You just have to think about these things differently. So having a place, a safe place, a safe harbor to protect the researchers so that if they felt that they found something worthwhile to send our way that was bad, they could tell us safely. So to date, we've received, I want to say, close to 25,000 reports from researchers on vulnerabilities within the DoD of all sorts. It's been a hugely successful program. So I'm a huge proponent of every organization having a security.txt file, a security@ email address. Whatever method is best for the researchers, think about it from their perspective. They're just trying to tell you and do the right thing. Don't make their life hard.

"If the first people you're talking to aren't the lawyers, then you've got a problem."

Lisa Wiswell Coe: I had this mentality that you got to drive it like you stole it. If you're really going to affect change and essentially throw the bowling ball through the window of how we do things, you've got to put your money where your mouth is, otherwise you're just part of the bureaucracy and part of the problem. So if you know how the bureaucracy works, you can find ways to cut corners or to hack it. Break down assumptions and get out of that loop.

Alex Rice: The risks were all around perception and unintended consequences for the hackers. Like, are people going to demonize the hackers? Are they going to be excited to receive the vulnerability reports? Are they going to celebrate it after the fact? Are they going to try to cover it up and not be open about whether the hackers found anything? That was one set of it. Is the culture going to be receptive to feedback from hackers? There were a lot of risks around that. And a big chunk of what we focused on was, how do we manage the perception that hackers are good folks, they're here to help.

Katie Moussouris: I think it was definitely a group effort, the folks inside the Pentagon, like Lisa and Charley, absolutely were instrumental, and RoRo, of course, in calming the nerves of the nervous people inside the Pentagon and calming down the hackers was also a group effort. RoRo has a hacking background himself, even before his military service. So he's definitely a native of our pirate-y shores.

Lisa Wiswell Coe: This first thing has got to go perfectly. Otherwise, nothing else will ever be able to go.

Katie Moussouris: So, I think the effect inside the Pentagon was, wow those of us who were against this, we were wrong. But I think the Pentagon really understood the significance of what it had done and understood that in order to maybe inspire the next generation of cyber warriors as they call them, that they needed to show that the Pentagon was a place where you could safely hack and there was a vehicle for it, and that they welcomed it. And I know that the ripple effect throughout other governments was really, really intense as well. Definitely, there were other governments that were interested in launching bug bounties.

Lisa Wiswell Coe: The beautiful part about it is though, they came to a yes quite soon afterwards, after the success of Hack The Pentagon. And for me, I always love it when somebody wants to take credit for something because it means that it went well.

Dino Dai Zovi: I still have a little bit of a chip on my shoulder from all the people that reacted really negatively to (No More Free Bugs), because I think this is a fairly reasonable position. Because of that chip on my shoulder, I do feel kind of justified pointing to things like that and saying, "Look, see?"

Top two inline images courtesy of Katie Moussouris; third image courtesy of Alex Rice.

]]>
<![CDATA[Decipher Podcast: Zoe Lindsey]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/decipher-podcast-zoe-lindsey https://duo.com/decipher/decipher-podcast-zoe-lindsey Wed, 01 Sep 2021 00:00:00 -0400

]]>
<![CDATA[Uprising in the Valley: When Bug Bounties Went Boom, Part Two]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/uprising-in-the-valley-when-bug-bounties-went-boom-part-two https://duo.com/decipher/uprising-in-the-valley-when-bug-bounties-went-boom-part-two Tue, 31 Aug 2021 05:00:00 -0400

Following the success of the bounty programs started by companies such as iDefense, TippingPoint's Zero Day Initiative, and Mozilla, by the late 2000s more and more technology companies and platform providers began rolling out bounties of their own. Among the big players to enter the game were Google, Facebook, Yahoo, and eventually, Microsoft. There were plenty of growing pains in these programs and none of them debuted fully formed. Evolution and adjustment have been hallmarks of the successful programs, and from those efforts grew the idea of independent bounty platforms that could run bug bounty programs for companies and handle the intake and triage of the vulnerabilities from hackers. Companies such as HackerOne and Bugcrowd helped spur the second wave of innovation in the bug bounty community.

Note: All job titles and positions reflect the person's role at the time of the events.

Read part one here.

Alex Rice (Founder and CTO of HackerOne): One of the first things that Ryan McGeehan and I did at Facebook was to put in our responsible disclosure policies. We were one of the first major websites to formalize a policy. Other folks had reporting channels before, but we wanted to draft it through the lens of, "It has to be safe to submit here." So we worked with Marcia Hofmann at the EFF to draft the language around that, and the responsible disclosure program rolled out. It was quite successful and reasonably high volume for a while. Then fast forward a little bit. In 2010, one of the goals that I set for the team was continuous security testing. So we had the idea that we wanted to have a different pen test firm doing a comprehensive, wide open scope pen test of Facebook every week of the year. So we budgeted for 52 pen tests, we had three people working on the coordination of the scheduling, and we were kicking off a new pen test every week.

We were maxing out capacity, and anyone who looked competent, we'd bring them on as a pen testing provider. One of them was a small group of three Dutch guys, who had sent in a few vulnerabilities as a proof of concept. That was their lead gen for their pen testing business. So they had sent one of those in and we ended up using them for a few pen tests that year. And then later on, they ended up being my co-founders for HackerOne. Because we were talking about the experience they had, their lead gen for their pen test firm was responsible disclosure. They'd made a list of 100 tech companies in Silicon Valley, found a vulnerability in every single one of them, and just tried to responsibly disclose it, with very poor success rates. But talking to them about that experience was one of the impetuses for tackling this as a more holistic problem, primarily responsible disclosure, but bounties as well.

Sometime in 2011, Google launched their bounty program. Our continuous pen testing program had been pretty successful the year prior. It was a lot of work. We had renewed it for the next year. We had about 80 vulnerabilities come in from the 52 pen tests that we did throughout the year. And then Google did it and we were like, "Oh, yeah. This sounds like a good idea. Let's do it. We're feeling really confident right now. We're running a pen test every week, we’re finding fewer and fewer vulnerabilities. We're fixing everything super fast. What could go wrong?" So I got a small discretionary budget. We decided we were going to launch it. Had one lunch with Chris (Evans), to just hear how it was going for his team. There would be no problem. We launched it on a Friday for DEF CON, and had some 300 confirmed vulnerabilities by Monday morning. So one of the more humbling, but also eye opening experiences of my career.

Chris Evans (Chrome security lead at Google): Launching the Chromium program was a fair amount of work, but most of it was trying to work out how to get a large corporation to pay individuals! There were no politics or naysayers, only support and curiosity. Corporations do get more conservative as they get larger, but this hadn't happened to Google in 2010 so the launch wasn't impeded. The overall goal of the bug bounty program was singular and simple: Make users of Chromium and Chrome safer. In order to achieve this, we needed to maximize the appeal for hackers to participate. Fortunately, Google has always been chock full of some of the best hackers in the world, so we had some instincts on how to create the modern bug bounty program to appeal to hackers. Sure, we offered reasonable reward values (for the time) and raised them regularly. But we were aware of significant non-monetary motivators for hackers and wanted to account for them.

Casey Ellis (Founder and CTO of Bugcrowd): So there was this group of tech companies that were just everywhere and that were basically having vulnerability input come to them because research is happening whether you invite it or not. And I think the consensus amongst the community at that point was that Microsoft is something that we need to do something about. We need to actually help them improve. And then with Katie and the others that were involved there at the time, Microsoft came to the table with the MSRC, and then obviously extended on that and actually launched a bug bounty program. So they're trailblazers from a corporate standpoint.

Katie Moussouris (Senior security strategist at Microsoft): Yes, we all talked, but no, none of them had my problem set and none of them could help me. What was funny was, Chris Evans thought he helped me by paying for some of Microsoft's bugs out of his own bug bounty budget. He started paying for Windows local privilege escalation bugs, and he shows me this right when I come back from maternity leave and I'm like, "What? What are you telling me right now? I don't understand." And he's pointing to a reward in a release note or something on Google, and it said $5,000 for a Windows local privilege escalation bug, and he points to it on their website, and he's like, "We started paying for your bugs because they were included in an exploit chain of Chrome, and so we didn't want to leave a bug unrewarded, and so we paid for yours."

It did actually help me free up some budget later after we were already doing bug bounties to sponsor The Internet Bug Bounty. And so how I did it was using Chris Evans's example and said, "Hey folks, I know we've launched our bug bounties, and they're in very specific areas, we're not bountying Windows yet, and especially local privilege escalation bugs, which would bankrupt us, so here's what I propose, we can join this Internet Bug Bounty, I'm going to be on the advisory council, why don't we grab a couple of Windows folks, and they can advise on bounty rewards, and then Chris Evans, who's also on the council, has said that Google will only pay for ours until somebody else does, meaning us, I guess, but maybe the IBB. So I've talked to him, he's also on the IBB council, he'll get Google to stop paying for Windows bugs if IBB pays for them, so, let me just peel off 100 grand to kickstart the IBB." And I did, I peeled off $100,000 out of my own bounty budget to kickstart The Internet Bug Bounty. And that's what started it, and that's what got Google to stop paying for Microsoft's bugs.

"I cannot imagine running a world-class security program without making it attractive for every actor on the planet to participate."

Lucas Adamski (director of security engineering, Mozilla): The other part that was interesting was then trying to evangelize this for other companies who were slowly trying to dip their toes in the water. At least people like Katie were trying to drag them almost kicking and screaming into said water, which I definitely commend her for, because I don't know that I could have had the patience that she had. A lot of it was then sort of what-about-ism, like what about if this happens, what about if they sell it to the bad guys and the good guys? My response was like, "They can do all that anyway."

Alex Rice: I cannot imagine running a world-class security program without making it attractive for every actor on the planet to participate. Not everyone needs to do it the way that Facebook was doing it, but that was the starting point.

Casey Ellis: In 2012 or so I'd started to experiment with basically introducing gamification into testing. So I'd seen what was going on with bug bounty hunting, I'd been a part of disclosures. I'd done all that stuff. But then in the context of the company I was doing, it's like, "Okay. Does the competitive element and does the diversity of skills applied to this problem space work better?" Because logically it's working for the bad guys. We're up against an army of adversaries. So an army of allies just seems like a logical way to balance the equation. I was already noodling on that. And basically the folklore is, and this actually happened, I took a trip down to Melbourne. I was meeting with a bunch of pen test customers. At the time Google and Facebook had just gotten a bunch of press around their VRP. I was thinking about it and everyone wanted to talk about it. And what I noticed was everyone was like, they'd reached the same conclusion. It's got some Silicon Valley issues to it and all of that, but it's actually more than that. It seems like a logical way to get access to the creativity we need to outsmart the adversary.

And it was cool because it was like, "All right. You guys understand that security is a people problem that tech speeds up. You could just stand up a page and an inbox and invite the internet to come hack you. Why aren't you doing that?" It was a loaded question. I knew there'd be strong answers to it, but that's how I teased it out. And they all said the same things. They basically said, "I don't know if I trust hackers yet. I don't know how to manage the overhead of having a conversation with the entire internet. I don't know if I could fix all of the things that got found. I don't know how to pay someone in Uzbekistan." All of that stuff. And really it was actually on the flight back from that business trip where the light bulb went off that they'd all said the same things. And I had the idea for the Bugcrowd and literally registered the domain and the Twitter handle that night. So that was the bing and it went from there.

Chris Evans: Vendor bug bounty programs were very uncommon in 2010, but certainly not a new idea. For example, Netscape and then Mozilla had been running a program for many years. And in public presentations on Google's various bug bounty programs, we'd always start the story in 1981 with "Knuth reward checks" -- while not necessarily security bug related, Knuth was definitely paying rewards for errors in his books. We did think the bug bounty landscape was ripe for some pioneering and innovation, though. We tried to tackle the space by bringing some fresh ideas and definitely took a "launch and rapidly iterate" mindset.

Alex Rice: The business around HackerOne started early on from those public bounty programs in tech companies. We knew it was insanely valuable for those organizations, we knew they wanted to do it. We didn't expect it was going to be as large as it was going to be. And really, the impetus for it was that most organizations struggle to run a proper bounty program at scale. If you properly incentivize several thousand hackers to go look for security vulnerabilities, most software development teams are not properly staffed to deal with that. You can opine about what that says about the state of technology as a whole, but that's just the reality that we ran into. We would launch programs where even solid security teams could not handle the volume coming in. And not because they were below benchmark or they didn't care, or they were irresponsible; security teams everywhere are just at a massive disadvantage when it comes to remediation. And there's a lot of work that needs to go into them.

Dino Dai Zovi: It became like, all right, now there's actual risk to doing it, and also companies started going after the researchers, with lawsuits and other things. This is also risky because if you're just coming in off the street and sending this vulnerability to this company, you are actually putting yourself at risk and you don't have any lawyers. One of the ways that I thought about it is, "Look, if there's a bug bounty, if there's money changing hands, you radically change the conversation, because one, now it is not an unsolicited random thing. It is solicited, and because there's payment involved, there is some form of contractual arrangement."

And then when there's a bounty that says, "Hey, look, here's a scope. Go to town in that scope, and basically here's the reward," now you know how much the reward for your time could be, you know whether you're going to spend 10 hours trying, 100 or 1,000. That just changed the conversation drastically, and that will also result in more vulnerabilities being reported. More people investing the time, more people reporting those vulnerabilities, and you just change the economics, change the game. And so that's sort of what I wrote up in that blog post, just to clarify for people, and there was still a lot of bellyaching and those other things. Like someone at Microsoft said, "We'll pay bounties over my dead body." Last I checked, he's still alive.
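To make the economics Dai Zovi describes concrete, with hypothetical numbers: a researcher who values their time at $150 an hour and sees a $15,000 bounty on a target can rationally invest up to about 100 hours of hunting ($15,000 / $150 = 100 hours), whereas with no bounty the calculus supports essentially zero hours beyond curiosity. Posting a scope and a price converts an unpaid favor into work a researcher can actually budget for.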

Katie Moussouris: There was Microsoft, and there was me who happened to work at Microsoft, and we were not always aligned, it turns out, on what we thought the best thing to do with the hacker community was. But that's kind of why they hired me. They hired a hacker; it takes one to know one. And they had hired hackers before me, like Window Snyder. So, they weren't allergic to the opinions of hackers, but when it came to what they were going to do in terms of vulnerability reports and payment, in 2008 one of the executives had publicly said in the actual news, quoted as himself, that as long as he worked at Microsoft, they would never pay for vulnerability information. So before Google had made its move, Microsoft had already said, no way, we're never going to pay. Microsoft controls who it allows to speak to the media pretty heavily, as most corporations do. Now, I was a trained media spokesperson and had been a spokesperson for Symantec before joining Microsoft, and I was also trained as a spokesperson at Microsoft. However, even though this was my program, they decided to let an unqualified male speak about it.

So the program he was speaking about, on my behalf and on behalf of Microsoft, was the Microsoft Vulnerability Research Program that I started in 2008. And then somebody asked the question, if Microsoft is going to start paying bug bounties and whatnot, and that's when he just volunteered this absolute that he was not given any kind of prep to answer; he just decided, since he was in charge of security response at Microsoft, that he would be able to control that, and so he publicly said no, and it was just kind of something that came out of his mouth. So, not only did I have this massive internal inertia to deal with at Microsoft and in my own chain of command, but I also had to deal with the fact that he had made an official public statement as an officer of the company that they would never do this. So, to say that there were differences between my opinion of how things should go and Microsoft's at the time is an understatement. It was a public disagreement at that point, only I, as a subordinate, couldn't say anything. So how do you like them apples?

"There was Microsoft, and there was me who happened to work at Microsoft, and we were not always aligned."

Dino Dai Zovi: I don't have a good number, but (the research community) was small enough that everyone kind of knew who could land exploits and who couldn't. I remember when I first started working at @stake, I showed up and then I could land real exploits. And everyone was just like, "Who the fuck is this guy? Where the fuck did he come from?" Because it was just a small community, and everyone's just like, "We know everyone and he can land real shit. That's..." This is how one of my friends mentioned it to me, because I was like, "Why does everyone not like me in the office? Why is everyone being weird to me?" And he's like, "Yeah, we kind of could just do whatever we wanted, and then you showed up out of nowhere with actual skills and you wore button-up shirts and slacks and you stayed late. And we were used to just doing whatever we wanted," and that was kind of a buzzkill.

Casey Ellis: So I think vulnerability disclosure, as it exists today, wasn't anywhere near as noisy or topical. There was a subset of security that cared about it, the rest of security that was aware of it, and then no one else really knew what was going on. So the whole idea of no free bugs and all that kind of thing... people that did vulnerability research and people that worked in this space directly were onboard with that and had their opinion on it and whatever else. I think, for the better part, most of the rest of the internet didn't actually know it was even necessarily happening at that point. And that's not a dis on those guys. It's more just that's where we're up to at that point. So I think it was this confluence of different things that came together because there was that original OG disclosure posse, like the Dinos and the Charlies and those guys. And then there was this new wave of folk that were coming in from pen testing or some variation of that that were looking at it and thinking, "I could hack on some cool stuff that I don't get to hack on in my day job." Or, "I can use this as a way to actually transition into security, all sorts of other things." They didn't really have that. They didn't bring that history in with them.

Ramses Martinez (director, Yahoo Paranoids): When I first took over the team that works with the security community on issues and vulnerabilities, we didn’t have a formal process to recognize and reward people who sent issues to us. We were very fast to remedy issues but didn’t have anything formal for thanking people that sent them in. I started sending a t-shirt as a personal “thanks.” It wasn’t a policy, I just thought it would be nice to do something beyond an email. I even bought the shirts with my own money. It wasn’t about the money, just a personal gesture on my behalf. At some point, a few people mentioned they already had a t-shirt from me, so I started buying a gift certificate so they could get another gift of their choice. The other thing people wanted was a letter they could show their boss or client. I write these letters myself.

Katie Moussouris: Here was the thing, Microsoft was open to this kind of romantic idea of flipping offensive researchers to looking at defense. And I kind of let them have that funny idea in their heads that that was a real thing that could happen. I mean, it's not exactly what could happen. More likely is that offense-minded researchers are looking at defeating your existing mitigations, and the next time they see new mitigations, they'll look at defeating those too. They're not necessarily thinking, what would defeat me in my attempt, what would have stopped me? So, you can get them to think about that a little bit more, it turns out, if you dangle a quarter million dollars in front of their faces, which is what the BlueHat Prize was.

And it was also very deliberately looking for talent. So, we weren't saying we're going to completely outsource this forever, and we're always going to look for our next platform-level mitigations from the crowd, we're not going to crowdsource our defenses, but this is a hard place to hire for, and we want to find people who are offensive-minded who can turn that into practical defense. And so it had all of these goals. So originally, I set the top prize at $200,000, which it was, and the Microsoft folks were like, "Why does it have to be $200,000? Why can't it be $100,000?" And I looked across the table and I said, "You know as well as I do that marketing spends more on that Black Hat-DEF CON party than $100,000." At least at the time, this was a decade ago. "But they don't spend $200,000. So if you're telling me that a night of drinking fun is worth more to you, Microsoft, than an entire platform-level architectural mitigation, then we just definitely have to understand your priorities more." And they just kind of said, "Okay, 200k it is." I'm like, "That's what I thought."

Chris Evans: We were aware of significant non-monetary motivators for hackers and wanted to account for them. First, hackers are highly motivated by conversation and collaboration. So we didn't place any restrictions on publication. Money wasn't used to buy silence -- quite the contrary, we encouraged quality write-ups and publications for interesting issues. And critically, we made sure we set the reporting channels up so that we had "hackers talking to hackers and engineers". A bug report wasn't just a report, it was a two-way conversation and discussion of possibilities and ideas. One failure mode you unfortunately still see is when hackers talk exclusively to a triage/response organization, cutting off discussion from security experts and/or owners of the code in question.

Second, hackers are motivated if you take their work seriously. One way you can express this is to fix reported vulnerabilities quickly. It's easy to underestimate the power of this, but not only does it motivate hackers to work with you again, it obviously makes your users safer so it's a win-win. Another way to take someone's work seriously is to celebrate it. So we made sure to credit rewards in the main Chrome release notes, as well as referencing great findings and researchers in official blog posts. Third, hackers love to challenge themselves. We used a little bit of signposting in our rewards structure and it really worked. Aside from the obvious idea of paying more for more serious vulnerabilities, we also had bonuses for particularly interesting or creative research; for high quality write-ups; for good analysis of exploitability; and for providing a patch to fix the issue. We were serious about these bonuses and we used them generously.

Alex Rice: From the very beginning, we tried to design this with the researcher's experience in mind. It's always been a continuous process of learning for us, where someone will take issue with a term and we'll adjust it. I think we pretty quickly got to a state where the vast majority of researchers are comfortable doing this. The only scenarios we arrive at today are people that want to be completely anonymous, which we just can't facilitate. We can keep them anonymous from the customer, but not from law enforcement, and not from ourselves. That's the only thing we hear today around resistance to using this process. It makes sense. Security researchers have inherently been distrustful. There are scenarios where people will uncover vulnerabilities and the circumstances through which they discovered the vulnerability are not something they want to disclose. But they're still trying to do the right thing through disclosure. So we'll handle those out of band as we need to, but today that's the blocker that we have on it.

"Hackers are highly motivated by conversation and collaboration. So we didn't place any restrictions on publication. Money wasn't used to buy silence."

Katie Moussouris: So, is it money? Is that it, we just need to pour more money into it? The answer's no. When we modeled the system, we basically were like, "Look, people come in, they aren't born with this skillset, they have to grow it somehow, and both the offense side of the market, and the defense side of the market need to grow people with these skill sets." So let's take a look at the populations and see what we can tell about these populations and how long are they able to find vulnerabilities at the top of their skill set level, the real zero-days, the zero-click vulnerabilities, jailbreaking an iPhone, that level of skill, right?

And there's maybe a few thousand people worldwide at any given time. Why? Because new people are scaling up, while the people who have been there for a while are deciding to do something else with their lives, or simply being outpaced by the technology. People who could write exploits for Office 2010 might not be able to write an exploit for modern Office with its current mitigations; it may have outpaced their skill level. So essentially we looked at that, and then we asked, "Well, if we throw more money at it, does that tip the advantage to defenders?" And the answer is, not really, because it doesn't actually speed them up in knowing what they need to know and learning what they need to learn. What we found was that the key place to invest was in tools to determine whether or not a particular bug is exploitable.

Chris Evans: The success of the Chromium program was clear quite quickly. The obvious next step was to launch a broader program for Google: the Google Web program. The Google Web program was in many ways more novel than the Chromium program. I believe it was the first major program to target live web apps backed by live services. Launching was one of those situations where it would have been easy to "what if" the whole thing and panic ourselves into not launching -- perhaps on some legal, safety or PR concern. Fortunately, no one was pessimistic or overly conservative, so I drove forward with the launch. I was still in the Chrome org at the time so while I decided we should do this, many others stepped up to do the actual preparation and hard work. I think we all owe them a debt, because a bug bounty program for web properties and services is now a firm industry best practice with hackers, corporations and ultimately end users benefiting.

Pedram Amini: Look at what it takes to pop Chrome. It's not like my time, when it was a single bug that you could exploit with relative ease. You're talking about chaining a dozen things together. It's basically black magic to get exploits working in something like Chrome.

Tomorrow: Part three.

All material from author interviews, except Ramses Martinez quote, which is from a Yahoo blog post.

First image courtesy of Katie Moussouris; second image courtesy of HackerOne; third image CC By 2.0 image from Flickr.

]]>
<![CDATA[Lawyers, Bugs, and Money: When Bug Bounties Went Boom]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/lawyers-bugs-and-money-when-bug-bounties-went-boom https://duo.com/decipher/lawyers-bugs-and-money-when-bug-bounties-went-boom Mon, 30 Aug 2021 06:00:00 -0400

For as long as there have been computers, there have been bugs. That’s damn near 100 years and an uncountable number of bugs, some big, some small, some with wings, some that lived for decades. If they were discovered at all, most of those bugs were probably found by the developers themselves or another technical user and then fixed without much fuss. It wasn’t until much later that some bugs became security concerns and developers and researchers began thinking about them in terms of their potential effects on the confidentiality, integrity, and availability of a system. Finding and fixing bugs was gradually becoming both more difficult and more important. In 1983, Hunter & Ready was so confident in the quality of its new VRTX real-time operating system that the company offered to reward anyone who found a bug in it with a Volkswagen Beetle. “But don’t feel bad if a year from now there isn’t a bug in your driveway. There isn’t one in your operating system either,” the magazine ad said. It was a clever idea but it didn’t inspire many imitators. Not until 1995 when a startup called Netscape Communications offered cash and Netscape merchandise to people who reported security bugs in the new beta release of its Navigator 2.0 browser.

And so, the bug bounty was born. Or at least conceived.

It would be another nine years before the idea truly took off, when Mozilla unveiled its Security Bug Bounty Program, which paid the astounding sum of $500 for reports of critical security bugs in its applications. In the 17 years since Mozilla started its program, software providers, hardware companies, social media platforms, cloud providers, and even the Pentagon have taken the idea of a bug bounty and modified, reshaped, and remixed it. Today, bounties from public programs can reach six figures and there is a significant community of professional bug hunters who make their living from those bounties. Once dismissed as a novelty, bug bounties are now de rigueur. This is the story of the hackers who turned a niche idea into a worldwide industry. This is the first in a three-part series.

Note: All job titles and positions reflect the person's role at the time of the events.

Lucas Adamski (director of security engineering, Mozilla): The strength of any security system, to me, is simply a function of how many smart, motivated people have looked at it over a period of time. That's it. It's got almost nothing to do with who wrote it, in my opinion. It's about who has actually tried to break it, and that's what results in a strong system. So bounties were a way of saying, "Okay, we can only have so many people we hire." Mozilla originally was founded with no real intent to hire anybody. It was meant to be contributors. So the bounty program was just so we'd be getting contributors on the security side. I think the controversial side about it is, okay, why are we paying them, because we weren't paying any other contributors other than full-time. If we do a security bounty program, why don't we just do a feature bounty program? Anybody who fixes a bug or creates a feature is also going to get paid. That was a big culture discussion. I think there were some distinctions. First of all, there already is a market for these work products, a black market. It's not like people are generally getting paid to fix bugs or generate features randomly unsolicited. There's no market for, oh, I have a feature to flip a bit in something. Nobody's going to pay you randomly for that code you wrote, but there is definitely a market for the bug, an underground market for it.

Pedram Amini (founding member of iDefense Vulnerability Contributor Program and Zero Day Initiative): I went to Tulane and I didn't go to any classes my freshman year. All I did was stay up and hack the network because there was just so much stuff to play with. You could spend all day and night just discovering what was out there. What is this system? Let me go play with it. And I would find things. And when I would go to report it, because I did that every time, the school was so supportive. They could have easily kicked me out the first time I brought something up. Instead, not only were they supportive, but they encouraged me. But one of the things they would always say is, you remind us of this guy, Dave Endler. Who the hell is this guy? Every time I go to tell somebody about something, someone new on campus, they're telling me the name of this person. So fast forward to my senior year. And at this point I've made a couple of postings on Bugtraq and Full Disclosure, which was the way that you published anything back then. And so, because I had a few postings, iDefense, which was Dave Endler and Sunil James and Mike Sutton, came up with the idea of an open bug bounty. Let's buy vulnerabilities from people and report them to the vendor, and we'll have early access to them as part of our information feed that goes to our customer base. So to get it started, they scoured Bugtraq and Full Disclosure, they enumerated the folks that had published a few advisories, and they sent outreach to them announcing this program that they were starting.

And so I get this email and it's got Dave's name on it who I've been hearing about for years. And I wrote him back. I'm like, this is neat. Who do you guys have on your end actually validating these things because I love the idea. I'd love to be involved. He's like, "Actually nobody. I'll be in town next week, visiting buddies, why don't we meet and see if something can be worked out?" So when I graduated school, after this meeting, my first job was with iDefense as the first person to sit there and validate these things on our end.

Aaron Portnoy (security researcher at Zero Day Initiative): I began working at the ZDI as an intern in 2006, the year after it had formed and prior to Pwn2Own. My main responsibility was verifying incoming zero-day submissions from our pool of external researchers. The process could be quite time consuming as it required installing and setting up software, debugging the issue, reverse engineering the root cause, verifying exploitability, and finally suggesting appropriate compensation.

Pedram Amini: And let me tell you, there was trolling, they were giving us just some things you didn't even want to validate. But the company has put budget there and we're like, let's just do it. Let's show people that we're paying them. Let's publish advisories at some point, this will change. And sure enough, it did.

Aaron Portnoy: As the youngest member of the team I tended to work through my analysis queue more quickly than others, motivated both by my love of reversing and by the ever-present pressure from the submitting researcher who was awaiting a determination. This was why I ended up being the team member chosen to be on-site and adjudicate the Pwn2Own contest when we launched it in 2007. In the following years I was promoted to manager of the ZDI and inherited the responsibility to craft future contests and rules, which I did for the next six contests.

Pedram Amini: I had hired Aaron when he was 16 years old. He had come into the office. One of the sales guys came into my room like, "Hey, there's a dude here. He wants to do some technical work." And he mentioned the word OllyDbg, which at the time was a debugger that was kind of new age. And when I heard that, I'm like, let me see what this guy can do. So I went in, met this young kid, and I had to talk with his mom. And she said, "I don't want him working so young." I'm like, believe me, I'm a dinosaur.

Dragos Ruiu (founder of CanSecWest conference): It was my idea. What happened was, there was a gentleman from the UK and he gave us a presentation on Apple vulnerabilities. And he had a real unfortunate history because he had had a run-in as a kid with law enforcement. And he got basically a slap on the wrist; he was a brilliant guy, he was just poking around the boundaries of the law in those days. But he was very, very nervous. I, as usual, trying to do the good thing, sent Apple a copy of the presentation. So Apple responded to the presentation with a cease and desist letter from their lawyer to him. And he's like, "Eh." I was just pissed. And this was right around the time when the common refrain was, "Apples don't get hacked. They're secure." Because Windows was just shit in those days. It was like, you breathe wrong and you'll get a blue screen. And so I said, "Okay, well forget this. I'm going to get a MacBook, I'm going to put it in a room, and the first person to hack it gets to keep it. And I'm going to invite a bunch of reporters here to watch this, because Apple screwed me out of a great presentation dropping a whole bunch of important security information. I'm going to replace it with this spectacle."

Dino Dai Zovi (security researcher, co-winner of the first Pwn2Own contest): So when I was doing it, the stuff online just said, "Get the free laptop," but when I was sitting in my apartment, I was like, "I'm just doing this for the credit," and kind of just to show off, really. But because my friend Shane (Macaulay) was actually in Vancouver, I was like, "Yeah, you get the laptop. I just bought a MacBook Pro, I don't need one that's identical specs. I don't care." But I was the one finding the exploit; he was just running it on the keyboard. Just to make sure I'm clear about that, because I'm in this to show off. Absolutely what I'm in it for. Because I was 27 or something. So I did that. But then the next day, when they said, "Oh yeah, it worked, the exploit worked, blah, blah, blah," I'm like, "Awesome." And they're like, "Oh, by the way, ZDI wants to offer $10,000 for the details of the bug. You want to talk to them?" I'm like, "Sure."

Dragos Ruiu: So right around that time, it was kind of an accident, someone came in and said, "Hey, so what are you going to do with that exploit that somebody's going to use for the MacBook? Can we buy it from you? Can we put up a cash prize for that?" And then, I don't remember, maybe that was 5K, and then I believe it was Aaron and company, said, "You know that 5K, can we make that a 10K prize?" So this started a little bit of a bidding war. And so that was the very first one.

"Oh, by the way, ZDI wants to offer $10,000 for the details of the bug. You want to talk to them?" I'm like, "Sure."

Charlie Miller (security researcher, four-time Pwn2Own winner): On one of the Pwn2Owns, I had two Safari exploits, I think. Back then, the way Pwn2Own worked was, and still, probably to this day, it's not super well organized, it's run by a bunch of hacker dudes so you never know exactly what you're going to get. It takes me a long time to write an exploit. So I would see Pwn2Own is in a month or two. So I was like, "I'm going to find two Safari exploits and then I'll win the thing twice and I'll get twice the money as last year. That sounds like a good deal." Then, a week before Pwn2Own, they announced the rules and you can only win one. So I won and then I had this other exploit, right. I was like, "Well, what am I going to do with this thing?"

I can't use it in a contest and I could just report it to Apple, I guess, but I didn't really see the point in doing that. I mean, obviously the point is it makes everyone more secure. But for me, personally, there wasn't much to get out of that. I guess I could have given it to ZDI or something, but there weren't really any options to do that. I wasn't going to give it to a bad guy. So basically I just didn't do anything. I just did nothing. Then the next year Pwn2Own came around again and I was like, "Oh, I wonder if that exploit still works?" I tried it and it did, so then I won the next year with the same exploit from the year before. So the bad news is, for a year, there was this exploit that existed and, presumably, I'm not the only one who could have found it. Some other people might've had that same exact exploit. So people were vulnerable for a year.

Pedram Amini: One of the things we saw at both VCP and ZDI is we had a decent statistical sampling of researchers and vulnerabilities around the world. Obviously not anywhere near the whole ecosystem, but even with that good-enough view, we found a lot of overlap. In one case, I remember three different researchers from three different parts of the planet had found the same bug in three different ways and submitted it to us around the same time. So we knew for a fact that overlapping research is happening and it's happening with frequency. So that justifies the value of all this too, because somebody's weapon might become moot because someone else altruistically reported it through the program and it went to the vendor and it got fixed.

Aaron Portnoy: As a fairly young researcher responsible for what became a highly visible contest, I can say it was definitely a diplomatic challenge and learning experience for me. The very first year was an experiment and most people who first heard of Pwn2Own thought of it as a gimmick for marketing purposes. However, once the first news cycle hit and the breadth of coverage spread to mainstream outlets, the affected vendors started to take serious notice. As the years progressed many of the recurring vendors would even schedule their patch cycles to kill bugs immediately before the contest was held, hoping to invalidate potential negative outcomes. The more the contest grew, the more I had to work on establishing relationships with all the parties involved--from the researchers, to the vendors, to the press. As you could imagine, the larger the contest became, the more pressure the vendor representatives were getting from their legal and marketing teams. For example, in the early days of Pwn2Own our team would take the exploits for analysis and deliver them to the affected vendors after the event. This created a period of time where vendors were out of the loop but still had to respond to the massive amount of press coverage. That process evolved over the years and culminated in a "war room" whereby the disclosure happened on-site immediately after a successful demonstration, which is certainly a more collaborative solution and allowed us to foster a trusting relationship with vendors.

Dragos Ruiu: And ironically, this is the funny bit, in those days our biggest supporter was Microsoft, who was really sending a lot of guys. That was when they were just getting into gear. And so they were really, really being supportive of the security industry. So in a way, it was Microsoft money that bought that (Apple) laptop.

Charlie Miller: So the downside of not having these kinds of bug bounty programs is you get this sort of situation where people are not incentivized to report bugs. Then people like me don't and then I get to win a contest a year later because of it. So it all worked out in the end, that bug got fixed.

"Pretty quickly, people were calling us extortionists."

Aaron Portnoy: When Chaouki (Bekrar, CEO of VUPEN) showed up, he brought the all-star team of six guys you don't know about. And he gamed the contest in a way that our rules weren't really accounting for. The whole state of everything got to a point where you couldn't have one guy do everything. You couldn't have one guy find the bug, exploit the bug, get past the mitigations... so they need a team. And once it gets to a team level, it's basically like the Olympics at that point. In hindsight, the motivation of the contest was initially simply to host a spectacle event for offensive security researchers for bragging rights. As we realized how much impact it ended up having, it became clear that simply the yearly demonstration of what was possible in a real-world scenario was an important awareness campaign--not just inside information security, but more importantly to the Internet community as a whole. As researchers ourselves, we knew what was possible because we dealt with zero-day vulnerabilities on a daily basis. It wasn't until Pwn2Own where those outside infosec were exposed to the fact that the devices they most trust are one motivated reverse engineer away from being compromised. Additionally, with the increasing awareness of the offensive-focused exploit market, Pwn2Own was able to offer an alternative outlet for research that did not include operations against unwitting targets.

Dragos Ruiu: So doesn't that mirror what was happening in the exploit development industry right at the time? You still have, even these days, you can still have the guy that runs it from beginning to end, does some cool shit. But these days it's always four or five: you've got your fuzzer guy, you've got your stack and heap exploitation juju, magic ROP guy. You've got your guy who does the KLM thing. Everybody's got these multifunctional teams, and that's what it takes to play these days. We've seen some two-man teams, but it's two-, three-, four-man teams that are usually the guys that are pulling it in to do this kind of stuff. It's because the scope of the exploitation now has gotten to where it's really hard for one guy to be an expert in all of this crap.

Dino Dai Zovi: So obviously there's always kind of a black market, but that's a pretty dangerous road to go down if you don't know who you're dealing with. Basically, you should just assume that if you engage in a transaction like that, this is someone heavily engaged in cybercrime and you're an accomplice to it now. That's a terrible idea. And again, when you also realize, it was pretty obvious in 2000 that a lot of the energy behind a lot of cybercrime got connected pretty fast to Russian organized crime. They kill people. Do you want them to know who you are? Do you ever want to have a conversation anywhere near them? No. You do not.

Aaron Portnoy: I think it became very obvious that the secret 0-day market was going to appear. Every year at Pwn2Own, when the numbers went up, and the contestants went up, and then the difficulty went up. It's not hard to figure out where that's going to go.

Lucas Adamski: We were actually trying to compete with it head-on because that was a big part of the conversation too, is like, "Well, the underground market can pay ten grand, a hundred grand, maybe more for an exploit, but they're paying for something very different." They're actually paying for an exploit, and also they're paying for exclusivity. They're going to pay a lot of money, but you're going to have to weaponize it, and weaponizing stuff is a lot of work, and it's a pain in the butt. Also, you have to be a little ethically compromised maybe to do this. So we provided an alternate path.

Dino Dai Zovi: The idea (for No More Free Bugs) came from a night out in New York. And (Charlie's) like, "Hey, I'm in town, let's get a drink." And so we're having a drink at the bar and then I think Alex (Sotirov) came out too, and I can't remember exactly the order. Then Charlie told us a story about him reporting a bug, or basically his presentation, where he was going to have some vulnerabilities in Android.

Charlie Miller: The way I remember it was, we were at (CanSecWest) and that conference was unique, at least as far as the ones I go to, in that it's only one track. So everyone is there all the time together. During one of the breaks we were sitting around outside and we started talking about this idea, about how we do all this research and no one pays us. There are people who work for the companies that are doing the exact same thing as we do and they get a paycheck. So then we just had the idea that we were going to do this No More Free Bugs thing. I mean, it was totally spur of the moment. We just grabbed, I don't know why they were even there, but there were some old boxes. We found a marker somewhere and we made the sign. So Dino and Alex held the sign while I proselytized about it at the mic. So that was how it went down.

Dino Dai Zovi: So basically that was just an over beers discussion. And then months later at CanSecWest, maybe even six months later, I can't remember, they have lightning talks at the end, and Charlie's like, "Hey guys, I want to do a talk about No More Free Bugs," and we're like, "Cool." I think it was Charlie's idea, of, "Let's make a sign." And so Charlie and I are using markers on this cardboard saying, "No More Free Bugs." And we were laughing at the mental image of a cardboard sign, like, "Will hack for food." And then it got some legs and I was like, "All right, I'm going to write a blog post about this, because I think people can pretty easily take it the wrong way." And so I wanted to just put something in writing versus just a bunch of stuff, because pretty quickly, people were calling us extortionists.

Charlie Miller: A lot of people I talked to really did stop reporting bugs. Then, nine months later, I was having a talk with Dino or someone about it and it occurred to us that this wasn't really effective. It wasn't really doing anything that we wanted. So what happened? For nine months we stopped reporting bugs to them. So that's a good thing. But from the company's perspective, they don't necessarily... you can't really measure the security of their products. All you can do is see, oh, there's always patches. So there's a nine-month period or whatever where they're not getting any reports. Or at least not as many. So they're not having to make so many fixes. So, in essence, it looked like their software, all of a sudden, got really secure. But that's not what happened. So it had the opposite effect. We were like, "Oh, the companies are going to be like, 'Oh my God, please keep reporting bugs.'" But really, they were happy as hell. It was like, "Oh, sweet, stop reporting bugs. That's even better. Now we don't have to fix anything. Now it looks like our software's secure. Now we don't have to deal with these researchers." So it didn't really work, I think. It might have worked in raising public awareness that this is an issue and, hopefully, some of these young hacker types and researchers went on to become CSOs who thought that paying for this research is important. But, at the time, I don't think it really had the intended effect with the companies.

Tomorrow: Part two.

Header image Creative Commons license from Garrett Gee's Flickr stream; second image courtesy of Ryan Naraine; third image CC license from Garrett Gee; fourth image by author; fifth image courtesy of Ryan Naraine.

]]>