<![CDATA[Decipher]]> https://decipher.sc Decipher is an independent editorial site that takes a practical approach to covering information security. Through news analysis and in-depth features, Decipher explores the impact of the latest risks and provides informative and educational material for readers curious about how security affects our world. Tue, 16 Jul 2019 00:00:00 -0400 en-us info@decipher.sc (Amy Vazquez) Copyright 2019 3600 <![CDATA[Persistent Cookies Can Prove Troublesome for AWS]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/persistent-cookies-can-prove-troublesome-for-aws https://duo.com/decipher/persistent-cookies-can-prove-troublesome-for-aws Tue, 16 Jul 2019 00:00:00 -0400

For both attackers and penetration testers, phishing has been a go-to move for about two decades and it continues to work quite well today. It’s a reliable way to harvest users’ credentials for all sorts of apps and services, including cloud platforms, and a researcher has discovered that some cookies used for authentication on Amazon Web Services remain valid even after the victim has changed the password and logged out of the account. That means that an attacker who is able to phish a victim’s username and password will have persistent access to the victim’s AWS account, even with multi-factor authentication enabled.

In many phishing scenarios, the use of MFA is a solid defense, but there are lots of different MFA factors and some established methods for obtaining MFA codes that allow attackers to circumvent some of those protections. One way to do that is to force victims through a reverse proxy on the way to the phishing page the attacker has set up. That enables the attacker to intercept the victim’s traffic and record both the credentials and any MFA code she would enter when prompted. Spencer Gietzen, lead cloud pen tester at Rhino Security Labs, discovered on a recent customer engagement that this method not only worked against AWS accounts with MFA enabled, but also collected the victim’s AWS authentication cookie. In his research, Gietzen used Modlishka, a reverse-proxy framework released earlier this year.

“Because Modlishka is just proxying the AWS web page to our target user, it will be able to phish them, regardless of whether or not they have MFA enabled. Once the user is prompted for their MFA code and enters it, Modlishka will interject. It will create a valid login session with the AWS web console, store the details, then send the target user off to the actual AWS website and away from our phishing page. The user’s credentials and cookies will be stored in the cookie jar,” Gietzen said in a post on the phishing technique.
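The interception step Gietzen describes can be sketched in miniature: a reverse proxy sitting between the victim and the real login page sees both the submitted form fields (MFA code included) and any session cookies in transit. The following illustrative Python sketch uses hypothetical field and cookie names, not Modlishka's actual internals, but it shows why an intercepted one-time code offers no protection here:

```python
from urllib.parse import parse_qs

def harvest(headers: dict, body: str, jar: dict) -> None:
    """Record what a phishing reverse proxy can capture in transit:
    posted credentials (the MFA code passes through in cleartext,
    just like the password) and any session cookies. Field and
    cookie names here are hypothetical placeholders."""
    form = parse_qs(body)
    for field in ("username", "password", "mfa_code"):
        if field in form:
            jar.setdefault("credentials", {})[field] = form[field][0]
    if "Cookie" in headers:
        for pair in headers["Cookie"].split("; "):
            name, _, value = pair.partition("=")
            jar.setdefault("cookies", {})[name] = value

jar = {}
harvest({"Cookie": "aws-session=abc123; other=x"},
        "username=alice&password=hunter2&mfa_code=123456", jar)
print(jar["credentials"]["mfa_code"])  # -> 123456
print(jar["cookies"]["aws-session"])   # -> abc123
```

Because the proxy relays everything to the real site, the victim's login succeeds normally and nothing looks amiss on their end.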

While using this technique, Gietzen found that it also collected the authentication cookies that AWS issues to users. Cookies typically expire once a user logs out of the associated service or changes the password, but Gietzen discovered that wasn’t the case with the AWS cookies. Instead, he found that they lasted for about 12 hours, even if the victim changed the password on the phished account and logged out.

“For AWS users in particular, going with a hardware-based MFA device (like a Yubikey) is the way to go."

The technique Gietzen developed is specific to the AWS Identity and Access Management system, which allows the use of several different form factors of MFA, including mobile apps and hardware devices. The technique would not work against a hardware-based U2F token, but those tokens aren’t usable for everything on AWS, such as work done through a command-line interface.

“In this testing, we ‘phished’ our own user account and then logged into the web console with the stolen cookies. Acting as if we were a user who was just phished, we changed the password of our user and then logged out in hopes that it would invalidate the session that the attacker stole. We found that after doing this, the session was still valid. This showed us that even after a target user changes their IAM user’s password and logs out, the phished cookies could still create a multi-factor authenticated session to the AWS web console,” he wrote.

“We investigated the limitations of the cookies further by removing the MFA device used on that phished user’s account. We then replaced it with a brand new MFA device, changed the password for the user, then logged out of the web console. Even after all of that, we were still able to use the cookies we originally stole to create a multi-factor authenticated session to the AWS web console.”
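The replay step the researchers describe amounts to attaching the phished cookies verbatim to a fresh request and seeing whether the console still treats the session as authenticated. A minimal sketch using Python's standard library, with a placeholder cookie name rather than the real AWS cookie names:

```python
import urllib.request

def replay_session(url: str, stolen_cookies: dict) -> urllib.request.Request:
    """Build a request that replays phished session cookies verbatim.
    Sending it (via urllib.request.urlopen) and receiving a logged-in
    page back, even after the victim changed their password and MFA
    device, reproduces the behavior described above. The cookie name
    below is a hypothetical placeholder."""
    cookie_header = "; ".join(f"{k}={v}" for k, v in stolen_cookies.items())
    req = urllib.request.Request(url)
    req.add_header("Cookie", cookie_header)
    return req

req = replay_session("https://console.aws.amazon.com/console/home",
                     {"aws-creds": "phished-value"})
print(req.get_header("Cookie"))  # -> aws-creds=phished-value
```

The server-side decision of whether to honor that cookie is entirely AWS's; the finding was that it kept honoring it for roughly 12 hours.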

When Gietzen reported what he’d found to Amazon, the company said that the behavior is intended and is designed to allow long sessions for workloads that take extended periods of time, even after the user has logged out.

“Though this does make sense from a usability standpoint, it is still a concern for security. Because AWS says that the cookies are working as intended, this behavior will likely not change,” he wrote.

Gietzen said there are some effective defenses against this technique, with the most effective being the use of a hardware U2F key, which is highly resistant to phishing.

“For AWS users in particular, going with a hardware-based MFA device (like a Yubikey) is the way to go. It would prevent this attack because of some additional security features that are used by those devices and modern web browsers (URL verification mainly). Another option would be to remove IAM users completely from your AWS environment, and to only rely on IAM roles/temporary credentials, rather than long-lived usernames and passwords,” he said via email.
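The role/temporary-credential recommendation works because STS-issued credentials carry a hard expiration, unlike the long-lived console cookies described above. A small illustrative sketch of that expiry property (the timestamps and one-hour lifetime are hypothetical examples, not AWS defaults):

```python
from datetime import datetime, timedelta, timezone

def still_valid(expiration, now=None):
    """Temporary credentials die at their expiration timestamp no matter
    what an attacker captured; contrast with the ~12-hour console cookies
    above, which outlived even a password change and MFA reset."""
    now = now or datetime.now(timezone.utc)
    return now < expiration

# hypothetical one-hour assumed-role session issued at noon UTC
issued = datetime(2019, 7, 16, 12, 0, tzinfo=timezone.utc)
expires = issued + timedelta(hours=1)
print(still_valid(expires, issued + timedelta(minutes=30)))  # -> True
print(still_valid(expires, issued + timedelta(hours=2)))     # -> False
```

Short-lived credentials don't stop the initial phish, but they shrink the window in which anything stolen remains useful.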

]]>
<![CDATA[Deciphering Spy Game]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/deciphering-spy-game https://duo.com/decipher/deciphering-spy-game Mon, 15 Jul 2019 00:00:00 -0400

Spy Game isn't explicitly a sequel to Sneakers or Three Days of the Condor, but it's certainly a spiritual successor, with all of the high-stakes espionage, double-dealing, and shady characters that made those films classics. Also: Robert Redford. Spy Game serves as a master class in social engineering and influence operations and demonstrates exactly how powerful the art of persuasion can be. This is Deciphering Spy Game.

]]>
<![CDATA[Moody's Says Regulation Would Benefit Gas Pipeline Operators]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/moody-s-says-regulation-would-benefit-gas-pipeline-operators https://duo.com/decipher/moody-s-says-regulation-would-benefit-gas-pipeline-operators Mon, 15 Jul 2019 00:00:00 -0400

Spurred by fears that nation-state cyber-attackers may shift their attention to United States critical infrastructure, lawmakers and federal regulators are increasingly talking about cybersecurity standards for the natural gas pipeline system, similar to what currently exists for the power grid. A new report from Moody’s Investors Service said imposing mandatory cybersecurity regulations would be “credit positive” for operators and the utilities.

“The implementation of mandatory standards for gas pipelines is credit positive because it would force any late adopters of the standards to strengthen their baseline defenses, which would in turn make them less of a target for cyber attackers,” the analysts wrote.

By “credit positive,” Moody’s means that imposing regulations would have a positive effect on utilities and operators’ credit-worthiness, or the ability to borrow money and attract investment.

Like the rest of critical infrastructure, gas pipeline operators increasingly rely on networks of sophisticated computers to manage the flow of natural gas across state lines. Tampering or interfering with these systems would disrupt how natural gas is delivered around the country, which is why attackers consider these networks “prized” targets, the analysts said. Pipeline operators are also not required to report incidents the company does not deem material.

“The US natural gas pipeline industry, despite having become the primary supplier of fuel to the US power generation fleet, is not covered by federally mandated cybersecurity standards,” Moody’s analysts wrote. “Complete data on the number and scale of attacks is not readily available.”

Power Grids are Regulated

In contrast, the power sector is regulated. The North American Electric Reliability Corporation Critical Infrastructure Protection (NERC CIP) consists of nine standards and 45 requirements covering areas such as the security of electronic perimeters, asset protection, disaster recovery planning, personnel training, and security management. Utilities can use the standards as a baseline cybersecurity strategy and build upon them to address their specific requirements.

The Federal Energy Regulatory Commission also requires utilities to report attacks on electric grids, even when attacks do not cause service disruptions.

“Requiring entities to report attempted cyber intrusions, as well as successful ones, is an important step toward enhancing the collection and distribution of information on rapidly evolving cyber threats,” FERC Office of Electric Reliability Director Andy Dodge said at a recent hearing before the House Energy and Commerce Committee’s energy subcommittee.

The disconnect between the two industry sectors—despite the fact that they are tightly linked and dependent on each other—“leaves a significant vulnerability in the utility industry's cyber risk management,” Moody’s said.

Mandatory cybersecurity standards should be viewed as a starting point, as they would help guarantee that all pipeline operators and utilities—even the late adopters—are investing in security defenses, “at least to the level required by law,” in order to avoid regulatory fines, Moody’s analysts wrote. Regulation would force operators to increase investments in this area to make the natural gas pipeline sector “more difficult targets for attackers.”

Federal standards would also help pipeline operators recover the costs of investment.

“As a regulated asset, natural gas pipelines charge rates that can be adjusted through rate case proceedings to recover prudently incurred costs,” Moody’s said.

Attempts to Self-Regulate

When it comes to critical infrastructure, there is a tug-of-war between companies urging self-regulation and regulators arguing that mandatory requirements would ensure that baseline defenses are in place for everyone. Former American Gas Association (AGA) president and CEO Dave McCurdy has said the association’s member companies have made progress in improving their cybersecurity postures through various initiatives such as data sharing. Companies can assess their defenses with tools such as the Department of Energy’s Cybersecurity Capability Maturity Model.

The Transportation Security Administration currently runs the natural gas pipeline security program—and industry oversight is weak, a Government Accountability Office (GAO) audit report in January found. Part of the weakness stems from the fact that the TSA only has the equivalent of six full-time employees supervising the entire industry, which includes natural gas transmission pipelines and pipelines transporting oil and other hazardous liquids.

The number of TSA critical facility security reviews of pipeline facilities has fallen sharply since 2010, Moody’s analysts wrote in the report, citing the GAO audit.

“We know that regulation is not a panacea, but rather, for many, it is a ceiling and creates a burden of compliance which takes away from security efforts and resources,” McCurdy said in a message to the association’s members back in February. AGA would rather see the TSA receive more funding and authority to inspect and audit the cybersecurity of the pipeline systems, McCurdy said.

Investor Focus on Security

Moody’s has been increasingly focusing on security as part of its industry and company analysis, recognizing that cybersecurity is an important part of assessing the company’s risk profile. In March, Standard & Poor’s downgraded credit bureau Equifax as a result of its 2017 data breach. Moody’s also revised its outlook of Equifax and cited the breach as one of the reasons.

The ratings agency also recently announced a joint venture with Israeli company Team8 to assess how vulnerable businesses are to cyber-attacks and create a global benchmark. The framework would allow organizations to measure their defenses and preparedness in comparison to other businesses and over time. The venture is separate from the credit ratings service.

]]>
<![CDATA[Privacy Group Asks FTC to Investigate Zoom]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/privacy-group-asks-ftc-to-investigate-zoom https://duo.com/decipher/privacy-group-asks-ftc-to-investigate-zoom Mon, 15 Jul 2019 00:00:00 -0400

A prominent privacy rights group has filed a complaint with the Federal Trade Commission against Zoom, asking the commission to open an investigation into the company’s practices after a security researcher discovered several vulnerabilities in the Zoom video conferencing client for Macs last week.

The complaint filed by the Electronic Privacy Information Center (EPIC) alleges that “Zoom intentionally designed their web conferencing service to bypass browser security settings and remotely enable a user’s web camera without the consent of the user”. EPIC also alleges that Zoom officials didn’t respond to the researcher’s reports quickly enough, putting users at “risk of remote surveillance, unwanted videocalls, and denial-of-service attacks”.

EPIC’s complaint, filed July 11, comes after security researcher Jonathan Leitschuh disclosed three vulnerabilities he found in the macOS Zoom client earlier this year. The most serious of the vulnerabilities allowed an attacker to force a victim to join a video call with her camera turned on. Another bug could be used to send a victim’s Zoom client into an infinite loop of trying to join Zoom calls. The bugs are all connected to the presence of a local web server that the Zoom macOS client installs. The web server stays behind even after the client is removed and will respond to requests.

Leitschuh informed the company about the weaknesses in March and went through several months of emails and calls with the company’s security team about the severity of the problems and potential fixes before ultimately disclosing the flaws on July 8. Zoom officials initially defended the presence of the web server as a workaround for a setting in Safari, but then issued a patch that removed the server once a user uninstalled the Zoom client. Apple took actions of its own, as well, pushing a silent patch to Macs that removed the web server before the Zoom patch was ready. Security researchers criticized Zoom’s slow response to Leitschuh’s report and Zoom CEO Eric Yuan said on July 10 that “we misjudged the situation and did not respond quickly enough.”

In its complaint to the FTC, EPIC said that the installation of the local web server and Zoom’s slow response put its customers at risk without their knowledge and without the ability to defend themselves.

“Zoom’s actions—including its decision to install a hidden web server on users’ Macs and require consumers to manually change their default camera settings—placed users at risk of severe violations of their privacy. Zoom customers risked consequences including: remote surveillance through hackers viewing a video stream from users’ computers without their knowledge, an attacker implementing a Denial of Service (DOS) attack through sending repeated HTTP GET requests, or users being launched into a video call with an advertiser without his or her consent. These privacy intrusions can have severe results, from illicit photographs or video being taken for sale to distribution of information for the purposes of physical harm,” the complaint says.

The group asked the FTC to investigate Zoom’s actions in this case and also to look into the vulnerabilities that Leitschuh discovered. EPIC also asked the commission to force Zoom to notify all past and present users about the flaws and the available patches, remove the local web server from every customer’s machine, and change the default video setting for calls to off. On July 14, Zoom issued an update that makes a change to the default video setting, but doesn’t turn it off completely.

“Zoom has implemented a video preview feature that pops up before any participant joins a meeting where their video will be on. The participant is able to opt to join with video, opt to join without video, or dismiss the prompt to not join the meeting at all. Additionally, the participant may also check a box to always see the video preview when joining a video meeting (this box will be checked by default),” the company said.

EPIC has had some notable successes with this kind of complaint in the past, including a 2009 complaint against Facebook over privacy settings. That complaint led to a 2011 consent order by the FTC against Facebook, which led to a reported $5 billion fine just last week.

]]>
<![CDATA[Mayors Pledge No More Ransom Payments]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/mayors-pledge-no-more-ransom-payments https://duo.com/decipher/mayors-pledge-no-more-ransom-payments Fri, 12 Jul 2019 00:00:00 -0400

Jackson County, Georgia. Riviera Beach and Lake City, Florida. When ransomware came knocking for these municipal networks, officials ponied up the money—to the tune of nearly $1.5 million—to recover from the infection ravaging their systems. But no more: as attackers lob more ransomware against municipalities, mayors from around the country pledged to not meet ransom demands.

"NOW, THEREFORE, BE IT RESOLVED, that the United States Conference of Mayors stands united against paying ransoms in the event of an IT security breach," the mayors wrote in a resolution adopted at the 87th annual meeting of the US Conference of Mayors.

The resolution, which was adopted unanimously by 1,400 mayors representing cities with populations over 30,000, is not legally binding, but will likely be used by mayors to explain what they are doing in case of a ransomware attack against city networks. At least 170 county, city, or state government systems have experienced a ransomware attack since 2013—and 2019 has already seen 22 attacks, the conference said.

The resolution is in line with recommendations from federal authorities, including the Federal Bureau of Investigation and the Department of Homeland Security. Paying the ransom gives criminals an incentive to attack more, and also finances future criminal operations.

"Paying ransomware attackers encourages continued attacks on other government systems, as perpetrators financially benefit," said the Opposing Payment To Ransomware Attack Perpetrators resolution. The mayors are interested in “de-incentivizing these attacks [ransomware infections] to prevent further harm.”

"The United States Conference of Mayors stands united against paying ransoms in the event of an IT security breach."

Riviera Beach paid 65 bitcoins, or approximately $600,000, and Lake City paid 43 bitcoins, or roughly $460,000. Jackson County paid $400,000, which at the time was around 100 bitcoins.

Not everyone pays. Last year, Atlanta declined to pay the $51,000 ransom, but paid dearly for that principled decision: the damage has been estimated at $17 million.

Similarly, when Baltimore’s IT network was crippled in May, the city's mayor, Bernard C. “Jack” Young, refused to pay the ransom—13 bitcoins, the equivalent of $76,280 at the time, and around $151,599 now. Instead, the city undertook the time-consuming and labor-intensive process to rebuild the IT network and restore from backups, even though that meant leaving city employees without access to email for weeks and taking down systems for paying water bills and parking tickets. The system for paying water bills is expected to be back sometime in August.
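The ransom figures above reflect very different bitcoin prices at the times of the various attacks, which a quick bit of arithmetic makes visible. This is an illustrative calculation taking the article's reported numbers as given:

```python
# Implied per-bitcoin prices behind the figures reported above.
payments_usd_btc = {
    "Riviera Beach": (600_000, 65),
    "Lake City": (460_000, 43),
    "Baltimore (demanded, at the time)": (76_280, 13),
}
for city, (usd, btc) in payments_usd_btc.items():
    print(f"{city}: ~${usd / btc:,.0f} per bitcoin")
```

The spread (roughly $5,900 to $10,700 per coin) also explains why Baltimore's 13-bitcoin demand was worth about twice as much in dollars by the time of the mayors' resolution.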

Recovery cost the city over $18 million: $10 million to rebuild the systems and $8 million in lost revenue in interest and penalties.

“Paying ransoms only gives incentive for more people to engage in this type of illegal behavior,” said Young, who proposed the conference resolution. Las Vegas mayor Carolyn G. Goodman was the co-sponsor.

Attackers are banking on the fact that many victims—municipalities, enterprises, and individual users—don’t have good backups. In many cases, these victims may not have any way to recover—or recreate—the data. While it’s easy to advise against paying ransoms, if there is no other way to recover crucial information, the payment becomes the only option.

"Paying ransomware sets a dangerous precedent and it’s very troubling that, in a way, it became the norm for local government," said Mickey Bresman, CEO of Semperis. "It’s easy to understand how the decision of not paying is a very hard one to make, because there is just so much at stake."

Paying the ransom is just one part of the process, and the least painful part. Victims receive the decryption key to unlock the data when they pay, but the IT team still has to rebuild the network and use the key to restore the files. Ransomware incidents typically wind up costing millions of dollars because of the work involved to restore the data, regardless of whether the recovery is coming from backups or the decryption key. In Baltimore’s case, the Federal Bureau of Investigation advised Sheryl Goldstein, the mayor’s deputy chief of staff for operations, to not pay the ransom because the city would “bear much of these costs” whether or not the ransom was paid, Ars Technica reported.

While many municipalities are paying via cyber-insurance policies, taxpayers still have to bear the bulk of the costs of recovery. Similarly, enterprises may rely on their insurance policies as part of their strategy, but that may not cover the costs associated with lost revenue, downtime, and rebuilding.

“Paying ransoms only gives incentive for more people to engage in this type of illegal behavior.”

There is also no guarantee the criminals will honor the deal. Weeks after paying, Lake City still has not recovered all its files, The New York Times reported. That was one of the reasons the Baltimore mayor gave for not paying the ransom.

“If we paid there was no guarantee that we were gonna get the keys to all of our system,” Young said recently.

]]>
<![CDATA[Apple Removes Zoom Web Server From Macs]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/apple-removes-zoom-web-server-from-macs https://duo.com/decipher/apple-removes-zoom-web-server-from-macs Thu, 11 Jul 2019 00:00:00 -0400

The backlash from a public security incident can be swift and brutal, a lesson that executives at Zoom are learning this week as the result of the disclosure of a weird, nasty bug in the company’s video conferencing client for Macs. The company’s CEO said Wednesday that “we misjudged the situation and did not respond quickly enough” to the vulnerability report, and that Zoom is taking measures to fix that, including establishing a public vulnerability disclosure program.

The problem began in earnest on July 8 when security researcher Jonathan Leitschuh published a long piece describing both the weakness in the Zoom client for macOS and the disclosure and remediation process he went through with the Zoom security team over the preceding three months. Neither description was especially pretty. Leitschuh found a pair of vulnerabilities in the client, the most serious of which could allow an attacker to force a victim to join a Zoom call with video turned on. More importantly, he also discovered that the Zoom client installs a local web server that remains on the machine even after the user uninstalls the client.

The web server is used for a variety of things, but it’s at the heart of the flaw Leitschuh found and the revelation that it stays behind after the client is gone angered users and mystified security researchers. After Leitschuh’s disclosure, Zoom officials said they didn’t have a simple way to help users delete both the client and the web server. On July 9, the company issued an update that included a one-click method for removing both the client and the local web server.

On Wednesday, Apple made its own move, pushing an update that took the web server off Macs.

“Apple issued an update to ensure that the Zoom web server is removed from all Macs, even if the user did not update their Zoom app or deleted it before we issued our July 9 patch. Zoom worked with Apple to test this update, which requires no user interaction,” Zoom CEO Eric Yuan said in a post Wednesday.

That’s a start, but Yuan said that the company also is working on a second update, to be released this weekend, that will give users more control of the video settings in the client. One of the issues that Leitschuh found was that an attacker could create a meeting and opt to have other people join with video turned on. The next update will address that.

“Our current escalation process clearly wasn’t good enough in this instance."

“With this release, first-time users who select “Always turn off my video” will automatically have their video preference saved. The selection will automatically be applied to the user’s Zoom client settings and their video will be OFF by default for all future meetings,” Yuan said.

Beyond the vulnerability itself, Leitschuh also detailed his back-and-forth with the Zoom security team after his initial disclosure to the company in March. The Zoom team confirmed the bug and offered Leitschuh a payment as part of its private bug bounty program, but he declined as the terms prevented him from disclosing the details even after the bug was patched. Leitschuh had discussions with the company about the bug and potential fixes for several months and eventually, after 90 days, Zoom issued a fix, which turned out to be incomplete. A regression issue soon after caused even more problems and Leitschuh ended up disclosing the details on July 8.

Yuan acknowledged on Wednesday that Zoom hadn’t responded properly to Leitschuh’s disclosure. The company plans to start a public disclosure program and is changing its internal processes, as well.

“Our current escalation process clearly wasn’t good enough in this instance. We have taken steps to improve our process for receiving, escalating, and closing the loop on all future security-related concerns,” Yuan said.

]]>
<![CDATA[GDPR Impact Lies in Big Fines, Process Changes]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/gdpr-impact-lies-in-big-fines-process-changes https://duo.com/decipher/gdpr-impact-lies-in-big-fines-process-changes Wed, 10 Jul 2019 00:00:00 -0400

European regulators are showing they are serious about the new privacy and data security regulations as they slap hefty fines against Marriott and British Airways for not properly safeguarding consumer data.

The big question when the European Union’s General Data Protection Regulation took effect last May was whether organizations would take the requirements seriously and change how they handle consumer data, or if they would just treat the penalties as part of the cost of doing business. British Airways has to pay £183.39 million (or $230 million) in penalties for a 2018 data breach impacting 500,000 customers. Marriott International has been fined £99,200,396 (or $124.2 million) because unauthorized individuals had access to the guest reservation database and were able to exfiltrate customer data for years.

“The GDPR makes it clear that organizations must be accountable for the personal data they hold. This can include carrying out proper due diligence when making a corporate acquisition, and putting in place proper accountability measures to assess not only what personal data has been acquired, but also how it is protected,” said United Kingdom’s Information Commissioner Elizabeth Denham in a statement of the intention to fine Marriott.

That’s over $350 million, or €314 million, in proposed sanctions against just two companies (Marriott plans to appeal, so the final penalty may change). Under the new rules, EU regulators can levy fines of up to 4 percent of an organization’s annual global revenue, or £17.9 million ($22.5 million), whichever is greater. The BA fine, which is 1.5 percent of the airline’s 2017 revenue, was the biggest ever issued by the ICO and the first after GDPR went into effect. In context, before last year, the largest fine from the UK ICO was £500,000, or $625,000.
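Those percentages can be sanity-checked with quick back-of-the-envelope arithmetic, taking the article's figures as given (an illustrative calculation, not the ICO's official math):

```python
# The BA fine is stated to be 1.5% of 2017 revenue, which implies
# the revenue base the regulator worked from.
ba_fine_gbp = 183.39e6
implied_revenue = ba_fine_gbp / 0.015
print(f"Implied BA 2017 revenue: ~£{implied_revenue / 1e9:.1f}B")

def gdpr_max_fine(global_revenue_gbp, floor_gbp=17.9e6):
    """GDPR cap as described above: the greater of 4% of annual
    global revenue or the fixed floor (stated here in pounds)."""
    return max(0.04 * global_revenue_gbp, floor_gbp)

print(f"4% cap at that revenue: ~£{gdpr_max_fine(implied_revenue) / 1e6:.0f}M")
```

In other words, the ICO's record-setting fine still landed well under half of the maximum the regulation would have allowed.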

The fact that the fines are so large will make it harder for organizations to defer security investments or shrug off security decisions as “not important right now.” Security performance has to be measured and managed in the same way as other business issues. The price of not doing so is getting higher.

“These fines make it clear -- executives and boards are responsible and accountable for cybersecurity,” said Jake Olcott, vice-president at BitSight, a cybersecurity ratings company.

Organizations now have a clear picture of what it would cost them if they decide to delay making security improvements, or don’t fully assess their procedures to understand what they are doing, said Tim Mackey, principal security strategist at the Synopsys Cybersecurity Research Center (CyRC). “These efforts range from secure development practices, up to date threat models, identification of dependency risks all the way through to penetration tests and comprehensive security audits,” he said.

]]>
<![CDATA[Zoom Bug Allowed Access to Mac Webcam]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/zoom-patches-bug-that-allowed-access-to-mac-webcam https://duo.com/decipher/zoom-patches-bug-that-allowed-access-to-mac-webcam Tue, 09 Jul 2019 00:00:00 -0400

A researcher has discovered a vulnerability in the Zoom video conferencing client for Macs that allows an attacker to force someone to join a call with video enabled, giving the attacker access to the victim’s webcam without permission.

Zoom has already implemented a fix that prevents an attacker from accessing the webcam, and users can also select a setting in the client that automatically turns video off whenever they join a new call.

The bug is the result of a confluence of a couple of features and design decisions. When the Zoom client is installed on a Mac, it automatically installs a small web server that is designed to respond to requests from the local machine. That server remains on the machine even if the Zoom client is uninstalled, and it can be used to reinstall the client automatically. Security researcher Jonathan Leitschuh, who discovered the bug, found that the server has an odd behavior that sends a small image file to the client when the user clicks on a link to join a meeting. The dimensions of the image actually encode a status code from the server.

“One question I asked is, why is this web server returning this data encoded in the dimensions of an image file? The reason is, it’s done to bypass Cross-Origin Resource Sharing (CORS). For very intentional reasons, the browser explicitly ignores any CORS policy for servers running on localhost,” Leitschuh wrote in his explanation of the vulnerability.

CORS is used to define which resources a web page can request from outside domains. Cross-origin AJAX requests to the local server are explicitly forbidden, so to get around that prohibition and allow users to join a meeting without having to click on a dialog box confirming that they want to open the Zoom client, Zoom chose to pass data through image requests instead.
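The trick Leitschuh describes, smuggling a status code through an image's pixel dimensions because a plain image load is not subject to CORS, can be sketched in a few lines. This is an illustration of the general technique only, not Zoom's actual server code; the status-to-width mapping is a hypothetical choice:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def make_status_png(status: int) -> bytes:
    """Build a minimal grayscale PNG whose pixel WIDTH encodes a status
    code: a 1-pixel-tall, `status`-pixels-wide image. A page can load
    this from localhost as an <img> and read back the dimensions,
    sidestepping CORS entirely."""
    width, height = status, 1
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)
    def chunk(tag: bytes, data: bytes) -> bytes:
        body = tag + data
        return struct.pack(">I", len(data)) + body + struct.pack(">I", zlib.crc32(body))
    raw = b"\x00" + b"\x00" * width  # one scanline: filter byte + pixels
    return (PNG_SIG + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(raw))
            + chunk(b"IEND", b""))

def read_status(png: bytes) -> int:
    """Recover the status: bytes 16-20 of a PNG are the big-endian
    width field of the IHDR chunk."""
    return struct.unpack(">I", png[16:20])[0]

img = make_status_png(200)
print(read_status(img))  # -> 200
```

In a browser, the receiving page would check the loaded image's natural width instead of parsing bytes, but the principle is the same: the data channel is the image geometry, not the response body.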

“This is a workaround to a change introduced in Safari 12 that requires a user to confirm that they want to start the Zoom client prior to joining every meeting. The local web server enables users to avoid this extra click before joining every meeting. We feel that this is a legitimate solution to a poor user experience problem, enabling our users to have faster, one-click-to-join meetings. We are not alone among video conferencing providers in implementing this solution,” Richard Farley of Zoom wrote in a post on the company’s response to the vulnerability.


Leitschuh found that by embedding just one line of code into a website, he could force a victim on a Mac to join a meeting he had created. The other half of the equation is turning on the victim’s webcam. The default behavior when a host creates a new meeting is to allow the host to specify whether the other participants’ video is enabled when they join, so by selecting that option, Leitschuh could create a meeting that automatically added victims with their video enabled. However, if a user has disabled the setting in her client that starts video when joining a meeting, this method can’t override that setting.

Zoom implemented a fix that prevents an attacker from forcing the victim’s camera to turn on, but the attacker could still force a victim to join a call. Leitschuh said the code to do this could be used in any number of ways.

“This could be embedded in malicious ads, or it could be used as a part of a phishing campaign. If I were actually an attacker, I’d probably invest some time to also include the incrementing port logic that [is in] the Javascript running on Zoom’s site,” he wrote.
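The “incrementing port logic” Leitschuh mentions refers to code that walks a range of localhost ports until it finds the one the Zoom web server is listening on. The same idea gives defenders a quick check for the leftover server. A minimal sketch, with a hypothetical port range (public reports put the real Zoom server at port 19421):

```python
import socket

def find_local_servers(ports, host="127.0.0.1", timeout=0.2):
    """Return the subset of the given ports on which something is
    accepting TCP connections on the local machine."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

A user worried about the lingering server could probe a range such as `find_local_servers(range(19400, 19425))` and investigate anything that answers; the range here is an assumption for illustration.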

In a statement Tuesday, Zoom officials said they're working on a method to allow people to delete the client and the web server.

"We do not currently have an easy way to help a user delete both the Zoom client and also the Zoom local web server app on Mac that launches our client. The user needs to manually locate and delete those two apps for now. This was an honest oversight. As such, by this weekend we will introduce a new Uninstaller App for Mac to help the user easily delete both apps," Farley said.

To be clear, Zoom honors the user’s meeting settings: if the user has checked the video OFF option in their user settings, that choice cannot be overridden by the host or any other participant.

Leitschuh also found a bug that allowed him to send Mac Zoom clients into an endless loop, but Zoom patched that flaw in version 4.4.2.

Tod Beardsley, research director at security firm Rapid7, said that much of the problem lies with the way that browsers handle CORS policies for localhost domains, and that the existence of simple mitigations for the Zoom issues reduces the actual threat for users.

“For starters, there's a (non-default) configuration setting that seems to totally mitigate this issue. At any rate, given the existence of this mitigation, the bug actually seems to be down in the browser, not the Zoom client, where CORS policies aren't enforced for localhost domains. This has been known for several years,” Beardsley said.

“The short story is, an updated client and setting your web cam to not automatically start makes this ‘zero day’ go away.”

]]>
<![CDATA[iMessage Flaw Can Brick iPhones]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/imessage-flaw-can-brick-iphones https://duo.com/decipher/imessage-flaw-can-brick-iphones Mon, 08 Jul 2019 00:00:00 -0400

If you’re one of the people who delays updating your iPhone for a couple of months, you might want to reconsider that policy. There’s a serious issue with the iMessage service that can allow an attacker to completely disable an iPhone just by sending one specially formed message to the device.

A researcher with Google’s Project Zero team, Natalie Silvanovich, discovered the issue in April and reported it to Apple. Apple fixed the issue in iOS 12.3, but the details of the vulnerability have only just become public. In essence, the bug is a problem with the way that iMessage handles a specific type of input.

“On a Mac, this causes soagent to crash and respawn, but on an iPhone, this code is in Springboard. Receiving this message will cause Springboard to crash and respawn repeatedly, causing the UI not to be displayed and the phone to stop responding to input,” Silvanovich wrote in her bug report.

“This condition survives a hard reset, and causes the phone to be unusable as soon as it is unlocked. The only way I could find to fix the phone is to reboot into recovery mode and do a restore. This causes the data on the device to be lost though.”

The vulnerability exists in iOS versions prior to 12.3, which was released in May. People who have automatic updates enabled or have updated their devices manually since the release are protected already.


This kind of vulnerability can be especially dangerous as it doesn’t require an attacker to have physical access to a target device, nor does it require any interaction from the victim. Just sending a malicious message to a vulnerable device is enough to trigger the bug, making the device unresponsive. The victim likely would have no indication of why the phone has been bricked. Recovering from an exploit against this vulnerability would be painful, as Silvanovich said in her bug report.

“For testing purposes, there are three ways that I found to unbrick the device:

1) wipe the device with 'Find my iPhone'

2) put the device in recovery mode and update via iTunes (note that this will force an update to the latest version)

3) remove the SIM card and go out of Wifi range and wipe the device in the menu,” she said.

For anyone who hasn’t updated to iOS 12.3, the time to do so is now, especially with details of the vulnerability now public.

]]>
<![CDATA[US Cyber Command Warns of Targeted Attacks On Old Outlook Flaw]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/us-cyber-commands-warns-of-targeted-attacks-on-old-outlook-flaw https://duo.com/decipher/us-cyber-commands-warns-of-targeted-attacks-on-old-outlook-flaw Wed, 03 Jul 2019 00:00:00 -0400

A group of attackers, possibly linked to the APT33 group associated with the Iranian government, is exploiting a two-year-old vulnerability in Microsoft Outlook to install several different pieces of malware on compromised servers.

The ongoing attacks have drawn the attention of the U.S. Cyber Command, which issued a warning about the activity on Tuesday. The warning does not specify what kind of organizations have been targeted, but Cyber Command focuses on attacks on government agencies and not private companies. The warning said that the malware is being delivered from one particular domain and Cyber Command has uploaded samples of the malware to the VirusTotal community site.

"USCYBERCOM has discovered active malicious use of CVE-2017-11774 and recommends immediate #patching. Malware is currently delivered from: 'hxxps://customermgmt.net/page/macrocosm'," the warning says.

Researchers at Chronicle, the security firm started by Google’s parent company Alphabet and just acquired by Google Cloud, were able to connect those samples to previous activity by APT33.

"The executables uploaded by CyberCom appear to be related to Shamoon2 activity, which took place around January of 2017. These executables are both downloaders that utilize powershell to load the PUPY RAT. Additionally, CyberCom uploaded three tools likely used for the manipulation of exploited web servers,” said Brandon Levene, head of applied intelligence at Chronicle.

“Each tool has a slightly different purpose, but there is a clear capability on the part of the attacker to interact with servers they may have compromised. If the observation of CVE-2017-11774 holds true, this sheds some light on how the Shamoon attackers were able to compromise their targets. It was highly speculated that spear phishes were involved, but not a lot of information around the initial vectors was published."

APT33 is a group associated with Iranian intelligence services, and has been known to use the PUPY RAT malware in the past. Researchers at FireEye did a detailed analysis of similar activity from APT33 last year, right around the same time that Shamoon attacks resurfaced. Shamoon is a wiper malware that destroys compromised machines. There was speculation at the time that the APT33 attacks and Shamoon activity were connected.

“Recent public reporting indicated possible links between the confirmed APT33 spear phishing and destructive SHAMOON attacks; however, we were unable to independently verify this claim. FireEye’s Advanced Practices team leverages telemetry and aggressive proactive operations to maintain visibility of APT33 and their attempted intrusions against our customers. These efforts enabled us to establish an operational timeline that was consistent with multiple intrusions Managed Defense identified and contained prior to the actor completing their mission,” FireEye’s analysis from December 2018 says.

APT33 was using the same Outlook vulnerability back then that the attackers identified by Cyber Command are using. In the earlier attacks that FireEye analyzed, the adversaries were using a variety of techniques to compromise mail servers, including the use of legitimate stolen credentials and exploitation of the Outlook vulnerability (CVE-2017-11774).

“Based on our experience, this particular method may be more successful due to defenders misinterpreting artifacts and focusing on incorrect mitigations. This is understandable, as some defenders may first learn of successful CVE-2017-11774 exploitation when observing Outlook spawning processes resulting in malicious code execution,” the FireEye analysis says.

“When this observation is combined with standalone forensic artifacts that may look similar to malicious HTML Application (.hta) attachments, the evidence may be misinterpreted as initial infection via a phishing email. This incorrect assumption overlooks the fact that attackers require valid credentials to deploy CVE-2017-11774, and thus the scope of the compromise may be greater than individual users' Outlook clients where home page persistence is discovered.”
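FireEye's point about misinterpreted artifacts suggests a simple hunting heuristic: flag Outlook spawning shell-like child processes, then widen the investigation to credential abuse rather than stopping at "the user was phished." A rough sketch over synthetic process-creation events; the event schema and the child-process list are illustrative assumptions, not FireEye's tooling:

```python
# Children of outlook.exe that public reporting associates with
# home-page abuse; this list is illustrative, not exhaustive.
SUSPICIOUS_CHILDREN = {"powershell.exe", "cmd.exe", "mshta.exe", "wscript.exe"}

def flag_outlook_spawns(events):
    """Given process-creation events as dicts with 'parent' and 'child'
    image names, return the events where Outlook spawned a shell-like
    child, which per FireEye should prompt a look at CVE-2017-11774
    home-page persistence rather than only a phishing investigation."""
    return [
        e for e in events
        if e["parent"].lower() == "outlook.exe"
        and e["child"].lower() in SUSPICIOUS_CHILDREN
    ]
```

In practice the events would come from an EDR or Sysmon-style telemetry feed; the point of the filter is that a hit implies the attacker already holds valid credentials, so the response scope should be wider than one mailbox.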

In a statement Wednesday, FireEye said the activity in Cyber Command's warning is from APT33.

“FireEye has observed and publicly shared evidence of multiple Iranian hackers using the Outlook CVE-2017-11774 exploit for the past year. FireEye attributes the indicators in U.S. CYBERCOM’s CVE-2017-11774 warning to APT33," the statement says.

Adversary exploitation of CVE-2017-11774 continues to cause confusion for many security professionals, FireEye said. If Outlook launches something malicious, a common assumption is that the impacted user has been phished, which is not what is occurring here, and the organization may waste valuable time without focusing on the root cause. Before being able to exploit this vector, an adversary needs valid user credentials; for APT33, these are often obtained through password spraying.

]]>
<![CDATA[Researchers Uncover Long-Term Facebook Malware Campaign]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/researchers-uncover-long-term-facebook-malware-campaign https://duo.com/decipher/researchers-uncover-long-term-facebook-malware-campaign Tue, 02 Jul 2019 00:00:00 -0400

Facebook has been a favored platform for attackers wishing to target specific groups of people for many years now, and though the company has made moves to rein in malicious activity on the platform, threat actors keep finding nooks and crannies to hide in. A recent campaign pushing Windows and Android malware through a network of Facebook pages targeting users in Libya, discovered by researchers and since dismantled, shows how simple it can be in some cases for attackers to push their wares on vulnerable users.

The malware campaign employed an extensive network of Facebook pages and other resources, some of which were set up in the name of a prominent Libyan military figure, Khalifa Haftar, the commander of the Libyan National Army. Researchers at Check Point discovered a Facebook page purporting to be operated by Haftar that was created in April. The page had more than 11,000 followers and published posts that included links, supposedly to leaked intelligence material. Those links actually led to downloads for malware targeting Android devices or Windows machines.

“The threat actor opted for open source tools instead of developing their own, and infected the victims with known remote administration tools (RATs) such as Houdini, Remcos, and SpyNote, which are often used in run-of-the-mill attacks,” Check Point’s analysis says.

“In our case, the malicious samples would usually be stored in file hosting services such as Google Drive, Dropbox, Box and more.”

The researchers discovered a pattern of grammatical errors and misspellings in the Facebook posts as well as on a blog that used Haftar’s name. The mistakes were distinctive enough that the Check Point team was able to identify more than 30 other Facebook pages carrying some of the same content and spreading the same links leading to malware. All of the pages had content targeting Libyans.

“Looking at the activity over the years, it seems that the threat actor gained access to some of the pages after they were created and operated by the original owners for a while (perhaps by compromising a device belonging to one of the administrators). The pages deal with different topics but the one thing they have in common is the target audience that they seem to be after: Libyans. Some of the pages impersonate important Libyan figures and leaders, others are supportive of certain political campaigns or military operations in the country, and the majority are news pages from cities such as Tripoli or Benghazi,” Check Point’s analysis says.

“In total, there are more than 40 unique malicious links used by the attacker over the years, which were shared in those pages. When visualizing the connections between the pages and the URLs used in different phases of this operation, we found that the malicious activity was highly intertwined as many of the links were spread by more than one page.”

The malware campaign carried on for several years, and the researchers were able to determine that some of the malicious links were clicked several thousand times each. The attackers also utilized some compromised websites in Morocco and Russia, as well as the site of a Libyan mobile carrier, to host the malware they were delivering. All of the malware samples used the same command-and-control server, and the Check Point researchers were able to dig into the WHOIS records and other information to find an email address and a personal Facebook page that appear to belong to the attacker behind the campaign, who used the handle Dexter Ly.

“This account repeated the same typos that we have observed in the involved pages, enabling us to assess with high confidence that this is the same person that wrote the posts’ content. The account also openly shared almost every aspect of this malicious activity, including screenshots from the panels where the victims were managed,” Check Point’s analysis says.

“The attacker shared sensitive information they were able to get their hands on from infecting victims. This included secret documents belonging to Libya’s government, exchanged e-mails, phone numbers belonging to officials and even pictures of the officials’ passports.”

Check Point’s team shared its findings with Facebook security officials, who were able to take down the pages involved in the campaign.

]]>
<![CDATA[No Public BlueKeep Exploit Yet, But Clock is Ticking]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/no-public-bluekeep-exploit-yet-but-clock-is-ticking https://duo.com/decipher/no-public-bluekeep-exploit-yet-but-clock-is-ticking Mon, 01 Jul 2019 00:00:00 -0400

Sophos Labs joins the growing list of organizations that have developed a BlueKeep proof of concept in recent weeks. Details are being held back to give enterprise defenders time to update vulnerable Windows systems before a potential attack, but it may just be a matter of time before the flaw gets exploited in an active attack or a public exploit becomes available.

Microsoft fixed the remote code execution vulnerability in the Remote Desktop Services components in older versions of Windows back in May. The vulnerability, which researchers have dubbed “BlueKeep” (CVE-2019-0708), affects older versions such as XP, Vista, Windows 7, and both 32-bit and 64-bit versions of Windows Server 2008. It allows an unauthenticated user to access the system via RDP and issue commands to install software, view and modify data, and create new user accounts. Microsoft released updates for legacy Windows versions over concerns that a worm could potentially exploit the flaw and spread quickly across different networks.

The fact that Microsoft held back some details about the vulnerability bought enterprise defenders some time since it would be harder for malware developers to figure out how to create a working exploit. While several researchers have developed their own working proofs-of-concept, they, too, have refrained from publishing the exploits or discussing the specifics of what they did.

This level of reticence is unusual, and underscores how concerned security professionals are about the possibility of a repeat of WannaCry—where a worm exploited a known Windows vulnerability and crippled organizations around the world within hours. With data showing that attackers target vulnerabilities that have exploit code publicly available, it makes sense that holding back on making the details public would delay the attacks enough to get more systems patched.

State of Research

Each proof of concept developed thus far has illustrated that this vulnerability could be used to cause a lot of damage. There is public code capable of crashing Windows and triggering a “blue screen of death” error, and researchers have shown several different ways this vulnerability could be exploited.

Sophos released a video showing an exploit developed by SophosLabs’ Offensive Research team, which “works in a completely fileless fashion, providing full control of a remote system without having to deploy any malware” and does not require an active session on the target. The video shows a script attempting to start an RDP session to the target Windows 7 virtual machine and triggering the vulnerability to establish a connection to an elevated command shell (with SYSTEM-level privileges). The video then shows the researcher invoking that command shell and gaining full control over the machine without needing valid credentials.

“We hope this video convinces individuals and organizations who still haven’t patched that the BlueKeep vulnerability is a serious threat,” said Andrew Brandt, principal researcher at Sophos. The analysts in the Offensive Research group characterized the vulnerability’s difficulty level as “intermediate” and "within reach of adversaries who have more time than money," Brandt said.

RiskSense senior security researcher Sean Dillon (zerosum0x0) created a private Metasploit module where he combined BlueKeep with Mimikatz. A potential attacker using the module would receive elevated System privileges and access to all the passwords for other machines on the same network. Someone else, going by the name Straight Blast on Twitter, claimed to have a successful exploit against Windows 7.

The Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) said it had successfully tested what may be the first remote code execution exploit for BlueKeep against a Windows 2000 machine.

"CISA has coordinated with external stakeholders and determined that Windows 2000 is vulnerable to BlueKeep," CISA said.

The prevailing assumption among security professionals seems to be that it is only a matter of time before a public exploit is available. The code could come from a researcher who makes the details public or from an active attack campaign targeting the flaw. The sheer amount of discussion among researchers (and quite likely the attackers, too) makes a public exploit more likely, said Jonathan Cran, head of research at Kenna Security. While there is a chance that nothing happens because the chatter is just noise, it is still worth paying attention to.

“Chatter is a great leading indicator of what will happen,” Cran said.

It is possible an exploit is already making the rounds within the attacker community and the attacks just haven’t been detected yet. Just because there haven’t been signs of one doesn’t mean it doesn’t already exist; attackers could be waiting for the right time. Maybe an exploit will never appear, but hoping for that outcome isn't a winning strategy.

“Microsoft is confident that an exploit exists for this vulnerability,” Simon Pope, director of incident response at the Microsoft Security Response Center, wrote in one of the company’s advisories.

Even if an exploit never comes to light, some vulnerabilities should be patched regardless of the availability of public code, and BlueKeep is one of them, Cran said. The fact that the flaw can be exploited without user interaction, offers remote code execution, and is in a commonly deployed protocol puts BlueKeep in that bucket. Remote Desktop is enabled by default, which means there’s a “large attacker opportunity,” Cran said.

“It is possible that we won’t see this vulnerability incorporated into malware,” Microsoft’s Pope said. “But that’s not the way to bet.”

Repeated Warnings

After releasing the patch in May, Microsoft issued a security advisory in late May and another reminder in June urging users to apply the update as soon as possible because attackers could cause a lot of damage with this vulnerability.

“It only takes one vulnerable computer connected to the internet to provide a potential gateway into these corporate networks, where advanced malware could spread, infecting computers across the enterprise,” MSRC wrote.

Even the NSA raised the alarm. “It is likely only a matter of time before remote exploitation code is widely available for this vulnerability,” the NSA warned in its advisory.

The Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) also released a warning. "A BlueKeep exploit would be capable of rapidly spreading in a fashion similar to the WannaCry malware attacks of 2017," CISA said in its alert.

However, it isn't clear if the repeated warnings are having an impact on the number of systems being patched. Shortly after the vulnerability was made public, Errata Security's Robert Graham created a BlueKeep scanning tool and found “roughly 950,000 machines” that were vulnerable on the public Internet. When Graham re-ran the tool 48 hours later upon Wired's request, he found that just a thousand machines had been patched. "If that very roughly estimated rate were to continue...it would take 10 years for all the remaining vulnerable machines to be patched," Wired reported at the time.

About a month after Graham's initial Internet scan, the number of vulnerable systems doesn't appear to have changed all that much, according to recent figures from risk management company BitSight. The research team used Graham’s tool and found 1.59 million systems that have been updated, and could not determine the patching status of another 1.3 million systems because they had enabled network-level authentication (NLA) in Windows. Enabling NLA is a good mitigation, as it prevents unauthorized access via RDP, but it also means the scanning tool can't collect patching information from those systems. China and the United States had the largest numbers of vulnerable systems.
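Scanners like the ones Graham and BitSight used typically work by sending the opening leg of the RDP handshake: the client's negotiation request lists the security protocols it supports, and the server's reply reveals whether CredSSP (NLA) is required before any vulnerable code path is reached. A minimal sketch of that first packet, following the byte layout in Microsoft's public RDP specification (MS-RDPBCGR); the helper name and spelled-out constants here are for illustration and are not taken from Graham's tool:

```python
import struct

# requestedProtocols flags from the RDP Negotiation Request
PROTOCOL_RDP = 0x00000000     # legacy RDP security (no NLA)
PROTOCOL_SSL = 0x00000001     # TLS
PROTOCOL_HYBRID = 0x00000002  # CredSSP, i.e. network-level authentication

def build_x224_connection_request(protocols):
    """TPKT header + X.224 Connection Request + RDP Negotiation Request.
    A scanner sends this to TCP port 3389 and inspects the server's
    negotiation response to learn which security protocol is enforced."""
    # Negotiation request: type 0x01, flags 0x00, length 8, protocol flags
    neg_req = struct.pack("<BBHI", 0x01, 0x00, 0x0008, protocols)
    # X.224 CR-TPDU: length indicator 14, code 0xE0, dst/src refs, class 0
    x224 = bytes([0x0E, 0xE0, 0x00, 0x00, 0x00, 0x00, 0x00]) + neg_req
    # TPKT: version 3, reserved, big-endian total length
    tpkt = struct.pack(">BBH", 0x03, 0x00, 4 + len(x224))
    return tpkt + x224
```

A server that answers with a failure code when the client offers only `PROTOCOL_RDP` is effectively advertising that NLA is on, which is why NLA-enabled hosts show up as "status unknown" in these scans.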

Interestingly, European countries, namely Germany, the United Kingdom, the Netherlands, and France, have patched over 75 percent of systems that were vulnerable. They seem best prepared for a BlueKeep worm.

One way to look at the number of vulnerable systems is that organizations aren't listening to the warnings. The other way is to realize that security teams may still be testing the updates, and are on track to deploy them soon. Most organizations tend to take about 90 days to apply the updates, so it may be that the number will drop significantly after the 90-day mark, which would be in August.

"I don't think we breathe a sigh of relief until we're at (at least) 50 percent patched, which on average takes 90 days in large organizations," Cran said. "My hope is that all the chatter and awareness helps speed that number in a significant way."

It is risky to get too attached to that 950,000 number, since the scanning tool can find only systems that are externally exposed to the Internet. Systems inside the network, behind the firewall, would not be visible to the scanner but would still be vulnerable if an exploit somehow lands in the network. It is quite possible the number of potentially exploitable machines is much, much higher.

When trying to prioritize patching, organizations should think about likelihood and impact. BlueKeep being exploited is highly likely, with a "potentially massive impact," Cran said. The impact is so high that even if the flaw had a lower likelihood of being exploited, it should still score highly in the risk calculus of where to apply effort.

“Don't panic, and keep patching based on risk,” Cran said.

]]>
<![CDATA[OpenPGP Certificate Attack Worries Experts]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/openpgp-certificate-attack-worries-experts https://duo.com/decipher/openpgp-certificate-attack-worries-experts Mon, 01 Jul 2019 00:00:00 -0400

There’s an interesting and troubling attack happening to some people involved in the OpenPGP community that makes their certificates unusable and can essentially break the OpenPGP implementation of anyone who tries to import one of the certificates.

The attack is quite simple and doesn’t exploit any technical vulnerabilities in the OpenPGP software, but instead takes advantage of one of the inherent properties of the keyserver network that’s used to distribute certificates. Keyservers are designed to allow people to discover the public certificates of other people with whom they want to communicate over a secure channel. One of the properties of the network is that anyone who has looked at a certificate and verified that it belongs to another specific person can add a signature, or attestation, to the certificate. That signature basically serves as the public stamp of approval from one user to another.

In general, people add signatures to someone’s certificate in order to give other users more confidence that the certificate is actually owned and controlled by the person who claims to own it. However, the OpenPGP specification doesn’t have any upper limit on the number of signatures that a certificate can have, so any user or group of users can add signatures to a given certificate ad infinitum. That wouldn’t necessarily be a problem, except for the fact that GnuPG, one of the more popular packages that implements the OpenPGP specification, doesn’t handle certificates with extremely large numbers of signatures very well. In fact, GnuPG will essentially stop working when it attempts to import one of those certificates.

Last week, two people involved in the OpenPGP community discovered that their public certificates had been spammed with tens of thousands of signatures--one has nearly 150,000--in an apparent effort to render them useless. The attack targeted Robert J. Hansen and Daniel Kahn Gillmor, but the root problem may end up affecting many other people, too.

“This attack exploited a defect in the OpenPGP protocol itself in order to ‘poison’ rjh and dkg's OpenPGP certificates. Anyone who attempts to import a poisoned certificate into a vulnerable OpenPGP installation will very likely break their installation in hard-to-debug ways. Poisoned certificates are already on the SKS keyserver network. There is no reason to believe the attacker will stop at just poisoning two certificates. Further, given the ease of the attack and the highly publicized success of the attack, it is prudent to believe other certificates will soon be poisoned,” Hansen wrote in a post explaining the incident.

“This attack cannot be mitigated by the SKS keyserver network in any reasonable time period. It is unlikely to be mitigated by the OpenPGP Working Group in any reasonable time period. Future releases of OpenPGP software will likely have some sort of mitigation, but there is no time frame. The best mitigation that can be applied at present is simple: stop retrieving data from the SKS keyserver network.”
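Until the keyserver network is fixed, one defensive habit suggested in community write-ups is to inspect a certificate before importing it: GnuPG's `--list-packets` output contains one `:signature packet:` line per signature, so a flooded certificate is easy to spot. A minimal sketch; the threshold and function names are arbitrary illustrations, and the input would normally come from running `gpg --list-packets cert.asc`:

```python
def count_signature_packets(list_packets_output):
    """Count ':signature packet:' lines in `gpg --list-packets` output."""
    return sum(
        1 for line in list_packets_output.splitlines()
        if line.lstrip().startswith(":signature packet:")
    )

def looks_flooded(list_packets_output, threshold=5000):
    """Flag certificates carrying an implausible number of signatures
    before importing them; a poisoned certificate in this incident
    carried nearly 150,000. The threshold is an arbitrary illustration."""
    return count_signature_packets(list_packets_output) > threshold
```

Screening the raw packet listing this way avoids ever handing the poisoned certificate to `gpg --import`, which is where the hard-to-debug breakage described above occurs.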


SKS, or synchronizing key server, is the software used to run keyservers, and the keyserver network itself is a distributed network with no central authority. The system was designed that way on purpose as it allows for synchronization of certificates among the various servers and provides resistance against an attack on one server. However, the architecture also allows the certificate spamming or flooding attack that affected Hansen and Gillmor, something that has been known for many years. There have been other such attacks in the past, but Gillmor said this incident looks different.

“SKS is known to be vulnerable to this kind of Certificate Flooding, and is difficult to address due to the synchronization mechanism of the SKS pool. (SKS's synchronization assumes that all keyservers have the same set of filters),” Gillmor, a contributor to free software projects and a senior staff technologist at the American Civil Liberties Union, wrote in a post.

“So none of this is a novel or surprising problem. However, the scale of spam attached to certificates recently appears to be unprecedented.”

GnuPG is used in a variety of applications, including some encrypted email and chat programs. But it’s also used extensively in signing software packages, something that a certificate flooding attack could wreak havoc with.

“The number one use of OpenPGP today is to verify downloaded packages for Linux-based operating systems, usually using a software tool called GnuPG. If someone were to poison a vendor's public certificate and upload it to the keyserver network, the next time a system administrator refreshed their keyring from the keyserver network the vendor's now-poisoned certificate would be downloaded. At that point upgrades become impossible because the authenticity of downloaded packages cannot be verified,” Hansen said.

Matthew Green, a cryptographer and associate professor at Johns Hopkins University, said that the attack points out some of the weaknesses in the entire OpenPGP infrastructure.

"PGP is old and kind of falling apart. There's not enough people maintaining it and it's full of legacy code. There are some people doing the lord's work in keeping it up, but it's not enough," Green said. "Think about it like an old hospital that's crumbling and all of the doctors have left but there's still some people keeping the emergency room open and helping patients. At some point you have to ask whether it's better just to let it close and let something better come along.

“I think PGP is preventing the development of better stuff and the person who did this is clearly demonstrating this problem.”

The certificate flooding attack on Hansen and Gillmor already has had some consequences for other people. Gillmor said that several people he knows have had serious issues because they had his certificate in their keyrings and refreshed them, which resulted in the spammed certificate being imported. Though the certificate spamming issue was known, it was never addressed because of a variety of barriers, including the fact that the keyserver system generally worked. But Gillmor said the attacks illustrate both the fragility and necessity of projects such as OpenPGP.

“One of the points I've been driving at for years is that the goals of much of the work I care about (confidentiality; privacy; information security and data sovereignty; healthy communications systems) are not individual goods. They are interdependent, communally-constructed and communally-defended social properties,” he said.

“As an engineering community, we failed -- and as an engineer, I contributed to that failure -- at protecting these folks in this instance because we left things sloppy and broken and supposedly ‘good enough’.”

Green said that while OpenPGP and the tools that depend on it still have value, they shouldn't be the tools protecting people in high-risk situations.

“People make a big deal out of why it's so important but in practice if people's lives are being put at risk because of this, it can't be that important. This tool can't be what's protecting activists if it can be broken like this.”

]]>
<![CDATA[Google Cloud Takes Chronicle, Future of VirusTotal Murky]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/google-cloud-takes-chronicle-future-of-virustotal-murky https://duo.com/decipher/google-cloud-takes-chronicle-future-of-virustotal-murky Fri, 28 Jun 2019 00:00:00 -0400

Eighteen months after Alphabet’s “X” moonshot factory officially launched Chronicle as a separate enterprise security company, the startup is being folded into Google Cloud and its products are becoming part of Google’s security portfolio.

“Chronicle’s products and engineering team complement what Google Cloud offers,” said Google Cloud CEO Thomas Kurian. Chronicle’s security tools will be fully integrated into Google Cloud by the fall.

Bringing Chronicle and its security intelligence and analytics capabilities—in the form of malware and virus scanning service VirusTotal and Backstory, the SIEM-on-steroids platform launched in April—into Google Cloud makes a lot of sense, but enterprises should pay careful attention to the upcoming integration and the future of Chronicle products.

As enterprises move more of their workloads into cloud infrastructure, they are looking for tools to secure them. VirusTotal will be a “powerful addition to the pool of threat data informing Google Cloud offerings,” Kurian said, and will be used to support applications running on the platform. Backstory, the cloud service that lets enterprises upload and analyze internal security telemetry data, helps customers detect and mitigate threats. Backstory’s investigation features combined with Google Cloud’s detection, incident management and remediation capabilities, will help customers protect both their cloud and on-premises environments.

“At Google Cloud, our customers’ need to securely store data and defend against threats—either in the cloud or on premise—is a top priority,” Kurian wrote.

VirusTotal's Next Act?

What’s not known is the future of VirusTotal as a stand-alone service. VirusTotal was already an important resource for malware researchers as well as for enterprise defenders when Google acquired the service in 2012. Since then VirusTotal has kept to its mission of being the “source of truth for malware.” It is unclear from current public statements whether VirusTotal will become one of the back-end tools available for Google Cloud customers, or if the service will continue to be maintained and used as a stand-alone service.

Many companies offer cloud and private-hosted versions of their tools, so there is plenty of precedent. Kubernetes is a good example of a product that can be used as part of a larger service or as a stand-alone platform. Google declined to comment and just pointed to the two blog posts from Kurian and Chronicle CEO Stephen Gillett.

It’s quite possible Google hasn’t figured out what the integration will look like yet, but defenders and researchers will be watching. The loss of VirusTotal and its repository of hash information would be a major blow to malware research.

What's Up Backstory?

Backstory raises its own set of concerns. When Chronicle launched Backstory back in April, Chronicle executives were careful to emphasize that while Backstory used Google’s search technology, cloud infrastructure, storage, and compute tools, the two companies were distinct. Chronicle had separate partnership and privacy agreements with customers that forbade it from sharing data with any outside entities, including Google. Chronicle’s IT infrastructure was firewalled off from the rest of Google, and Google couldn’t see the data that enterprise customers loaded into Backstory’s private clouds.

“We are firewalled off. We have a separate building, separate companies, separate entity structures, separate privacy agreements with customers,” Chronicle’s Gillett said during a Q&A with journalists at RSA Conference. “Google people can’t even badge into our building.”

At the time, Gillett said that Chronicle was just like any other Google Cloud customer—just as Google didn’t look at what data customers stored in Google Cloud, the search giant wouldn’t look at what was being stored within Backstory. This integration revives the initial concerns about Google potentially mining the data uploaded by the enterprises, since there is now no wall between Google Cloud and Backstory.

Tech companies, and Google especially, have faced a lot of criticism for voracious data collection. Google has been accused repeatedly of violating individual privacy, from tracking user location via cellular signals to letting third-party entities scan user emails. Existing Backstory customers will also be watching the integration closely.

That, of course, is assuming Backstory will continue in its current form. It’s more likely that Backstory will just become one of the many security features and tools available to Google Cloud customers.

Growing Google Cloud

Google has been moving aggressively to expand Google Cloud over the past few months. Google’s acquisition of Cask Data led to the Google Cloud Data Fusion data pipelining tool, and the $2.6 billion acquisition of Looker will expand Google Cloud’s business intelligence capabilities. Google also recently acquired Alooma to tackle cloud migration.

Bringing Chronicle back under the Google umbrella fits with its overall cloud strategy, but it will be an unsettling period for customers.

“We approach security holistically, from the chip to the datacenter, with a continuously growing set of security capabilities that work in concert to deliver defense-in-depth at scale: from hardware infrastructure, service deployment and user identity, to storage, internet communication and security operations,” Kurian wrote.

]]>
<![CDATA[Return of the Mack: Exploit Kits Back on the Scene]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/return-of-the-mack-exploit-kits-are-back-on-the-scene https://duo.com/decipher/return-of-the-mack-exploit-kits-are-back-on-the-scene Fri, 28 Jun 2019 00:00:00 -0400

There was a time when exploit kits were the coolest kids in school, getting all the headlines and making all the money for cybercrime groups. But time and tide wait for no man, and attackers soon moved on to other tactics. But recent developments have revealed that while exploit kit activity has dropped off considerably in the last couple of years, it is by no means gone.

Exploit kits are devilishly clever creations and are essentially the utility infielders of crimeware. They typically combine exploits for a number of different vulnerabilities in several separate applications, often browsers or their components. There are dozens of different exploit kits but most of them focus on apps such as Adobe Flash or other widely installed software with a steady supply of known vulnerabilities. Attackers typically target a given website or set of sites, use a server-side vulnerability to install the kit on the site’s web server and then wait for visitors to hit the site. When they do, the exploit kit will launch various exploits against known vulnerabilities in whatever browser the victim is using.

Cybercrime groups have used these kits for lots of different operations over the years, often to install a piece of malware on the victim’s machine, such as a keylogger or even ransomware. Recently, researchers at Malwarebytes discovered an operation in which attackers were able to compromise an ad server used in online ad campaigns and infect the ads that victims see. The campaign is using the GreenFlash Sundown exploit kit, which is not one of the more well-known kits, but is dangerous nonetheless. In the campaign Malwarebytes analyzed, the kit is installing the Seon ransomware on victims’ machines but it also has the capability to deliver other malware and cryptominers. The kit uses a number of redirections and code obfuscation to hide its intent and origins.

“The redirection mechanism is cleverly hidden within a fake GIF image that actually contains a well obfuscated piece of JavaScript. The next few sessions contain more interesting code including a file loaded from fastimage[.]site/uptime.js which is actually a Flash object. This performs the redirection to adsfast[.]site which we recognize as being part of the GreenFlash Sundown exploit kit. It uses a Flash Exploit to deliver its encoded payload via PowerShell,” Jerome Segura, a researcher at Malwarebytes, wrote in an analysis of the campaign.

“Leveraging PowerShell is interesting because it allows to do some pre-checks before deciding to drop the payload or not. For example, in this case it will check that the environment is not a Virtual Machine. If the environment is acceptable, it will deliver a very visible payload in SEON ransomware.”

Another interesting aspect of this specific campaign using GreenFlash Sundown is that the actors who use the kit typically only target victims in South Korea. That was not the case here, as the campaign targeted people in Europe and North America.

At the same time that GreenFlash was reappearing, a new exploit kit called Spelevo was emerging in campaigns that compromised websites and served Flash exploits, among others, to visitors. Spelevo isn’t particularly innovative or unique, but it has the capability of exploiting vulnerabilities in multiple apps and researchers say that it has been used recently to deliver banking trojans, including the nasty Dridex malware. One campaign analyzed by researchers with Cisco’s Talos Intelligence Group targeted the web server of a B2B site and was serving several separate exploits to visitors.

“Spelevo is a relatively new exploit kit that was first seen a couple of months ago. Since its discovery, it has gone through some minor changes, including modification of URL structure and some obfuscation changes in the landing and exploit pages themselves. It makes use of a lot of common techniques for exploit kits that we've seen over the years,” said Nick Biasini of Talos.

“Unlike the Rig exploit kit, Spelevo is being hosted using domains instead of hard coded IP addresses. Additionally, they appear to be leveraging domain shadowing, a technique Talos discovered several years ago, leveraging compromised registrant accounts to host malicious activity using subdomains. Talos also found several instances of 302 cushioning where the gates and exploit kits will leverage a series of HTTP 302 redirects to eventually point to the landing page. The core functionality remains the same: Compromise anyone who interacts with it.”
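The 302 cushioning Biasini describes is easy to picture with a toy example. The hypothetical Python sketch below (not code from Talos) stands up a local server that chains two 302 redirects before a landing page, and a small client that records each hop rather than following the redirects silently, the way an analyst tracing a gate might.

```python
import http.client
import http.server
import threading

class RedirectChain(http.server.BaseHTTPRequestHandler):
    """Toy stand-in for 302 cushioning: /ad redirects to /gate, which
    redirects to /landing, where the payload page would live."""
    hops = {"/ad": "/gate", "/gate": "/landing"}

    def do_GET(self):
        if self.path in self.hops:
            self.send_response(302)
            self.send_header("Location", self.hops[self.path])
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"landing page")

    def log_message(self, *args):
        pass  # keep the demo quiet

def follow_chain(host, port, path, limit=10):
    """Walk the redirect chain hop by hop, recording each URL instead of
    letting an HTTP library follow the 302s invisibly."""
    chain = [path]
    while len(chain) <= limit:
        conn = http.client.HTTPConnection(host, port)
        conn.request("GET", chain[-1])
        resp = conn.getresponse()
        resp.read()
        conn.close()
        if resp.status != 302:
            break
        chain.append(resp.getheader("Location"))
    return chain

server = http.server.HTTPServer(("127.0.0.1", 0), RedirectChain)
threading.Thread(target=server.serve_forever, daemon=True).start()
chain = follow_chain("127.0.0.1", server.server_address[1], "/ad")
server.shutdown()
print(chain)  # ['/ad', '/gate', '/landing']
```

In a real campaign the intermediate hops live on shadowed subdomains rather than paths on one host, but the hop-by-hop recording technique is the same.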

Although exploit kits may no longer be the go-to move for cybercrime groups, they still have a notable presence on the threat landscape and likely will for some time to come.

CC By 2.0 license photo from James Case.

]]>
<![CDATA[Google Makes Encrypted DNS Generally Available]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/google-makes-encrypted-dns-generally-available-for-8-8-8-8 https://duo.com/decipher/google-makes-encrypted-dns-generally-available-for-8-8-8-8 Thu, 27 Jun 2019 00:00:00 -0400

As more and more websites turn on HTTPS and online communications rely on cryptographic protocols such as Transport Layer Security, the Internet is increasingly encrypted. Except for one significant part: the Domain Name System.

DNS acts as the phonebook for the Internet, translating human-readable domain names into the actual address of the machine (a numeric string for IPv4, alphanumeric for IPv6) hosting the content or application the user is interested in. Since DNS queries are typically sent in plaintext via UDP or TCP, the entity operating the DNS server can see all of the requests, which amount to the entirety of the user’s online activity. For many users and organizations, DNS service comes from the internet service provider, which means the ISP can monitor what websites the user visited, when the visits occurred, and what device was used.
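To see why plaintext DNS is so revealing, consider how little is hidden on the wire. The short Python sketch below (illustrative, not tied to any particular resolver) builds the raw bytes of a standard DNS question; the queried name sits in the packet in cleartext, readable by anyone on the path.

```python
import struct

def encode_dns_query(domain: str, qtype: int = 1) -> bytes:
    """Build the wire-format bytes of a plain DNS query (A record by default)."""
    # Header: transaction ID, flags (recursion desired), 1 question,
    # 0 answer/authority/additional records
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in domain.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

query = encode_dns_query("example.com")
# The queried name is plainly visible in the packet bytes:
assert b"example" in query and b"com" in query
```

Sending those bytes over UDP to a resolver such as 8.8.8.8 on port 53 is all a traditional lookup involves, and that unencrypted datagram is exactly what an on-path observer gets to read.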

Encrypting DNS traffic would make this kind of web surveillance harder because ISPs and other DNS providers won't be able to see what users are doing online. A number of technology companies have been working on alternatives to sending DNS queries over UDP and TCP in the clear. DNS over HTTPS, based on the Internet Engineering Task Force’s RFC 8484 standard adopted last October, is perhaps the most well-known. Another is DNS over TLS.

There are several options for DNS over HTTPS, including Cloudflare with its 1.1.1.1 service, and non-profit Quad9's 9.9.9.9 service. Cisco's OpenDNS offers encrypted DNS and Mozilla has been working on its own efforts for Firefox. This week, Google announced general availability of DNS over HTTPS for its own public DNS service on 8.8.8.8.

“Today we are announcing general availability for our standard DoH service. Now our users can resolve DNS using DoH at the dns.google domain with the same anycast addresses (like 8.8.8.8) as regular DNS service, with lower latency from our edge PoPs throughout the world,” wrote Google product manager Marshall Vale and security engineer Alexander Dupuy.
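For readers who want to see what a lookup against the service looks like, here is a minimal, hypothetical Python sketch against Google’s documented JSON API at dns.google/resolve. (RFC 8484 DoH proper exchanges binary DNS messages; the JSON endpoint is a Google-specific convenience, used here for readability.)

```python
import json
import urllib.parse
import urllib.request

DOH_ENDPOINT = "https://dns.google/resolve"  # Google's JSON-over-HTTPS API

def doh_url(name: str, rtype: str = "A") -> str:
    """Build a DNS-over-HTTPS JSON query URL; the name travels inside TLS,
    hidden from the ISP and anyone else on the path."""
    return DOH_ENDPOINT + "?" + urllib.parse.urlencode(
        {"name": name, "type": rtype}
    )

def resolve(name: str, rtype: str = "A") -> list:
    """Perform the lookup over HTTPS and return the answer records."""
    with urllib.request.urlopen(doh_url(name, rtype)) as resp:
        data = json.load(resp)
    return data.get("Answer", [])

if __name__ == "__main__":
    print(resolve("example.com"))
```

The tradeoff described below still applies: the query is invisible to the network, but Google, as the resolver operator, sees it in full.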

Right now, if governments want to see where users are going online, they can demand to see the ISP’s records. In the United Kingdom, for instance, ISPs are required under the 2016 Investigatory Powers Act (IPA) to track all of the sites citizens visited over the previous 12 months, and they are also allowed to share the data with third parties for content filtering and advertising purposes. Using a public DNS service such as the one provided by Google (8.8.8.8) means bypassing the ISP, but it also means giving the data-hungry search giant access to all of those DNS requests.

Encrypted DNS queries cut out the ISP, or attackers lurking on the network. The DNS provider (say, Google or Cloudflare) can still see the queries, so there is a tradeoff over who gets to see the user's entire browsing history. Cloudflare, to its credit, has pledged to keep only 24 hours' worth of DNS queries to keep the amount of data being collected low.

Along with boosting user privacy, DNS over HTTPS will reduce the threat of man-in-the-middle attacks against DNS infrastructure via DNS spoofing, DNS hijacking, and DNS poisoning. Transmitting DNS queries through an encrypted HTTPS tunnel prevents anyone from hijacking those queries to redirect users to some other site.

]]>
<![CDATA[The Curious Case of Silexbot]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/the-curious-case-of-silexbot https://duo.com/decipher/the-curious-case-of-silexbot Thu, 27 Jun 2019 00:00:00 -0400

A new piece of malware that uses default credentials to log into IoT devices and then erase their file systems and shut them down is on the move, but it may not end up having the reach its alleged creator intended.

The malware is called Silexbot and a researcher at Akamai discovered it this week when he saw the binary on a honeypot and noticed some oddities in the code. The first thing Larry Cashdollar, a senior security researcher at Akamai, saw was a string that was a comment from the malware’s creator, saying that Silexbot was designed as a response to all of the low-level attackers who are building botnets of compromised IoT devices using publicly available malware samples. Silexbot is particularly vicious in the way that it goes after embedded devices, taking several steps to ensure that once the device is compromised, it is rendered useless.

The malware uses known, default credentials for various devices to log in to them over Telnet and then essentially destroys the device’s firmware. It first uses a command to list all of the device’s partitions and then writes random data into all of those partitions. Silexbot then removes all of the device’s network configurations and adds a firewall rule that will drop all packets going into or out of the device. It then stops the device and reboots it.

Cashdollar said the binary he recovered from his honeypot is an ARM binary and that the malware is targeting any device that looks like a Unix or Linux device. Cashdollar’s honeypot device emulates a DVR and he said he first noticed the Silexbot malware when he rebooted the honeypot after moving it to a new piece of hardware recently. What he saw when he turned the honeypot back on was the message from the Silexbot author. The message said the author was sorry for what he was doing but it had to be done to stop script kiddies from building IoT botnets.
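Cashdollar’s actual honeypot is more elaborate, but the basic idea of a Telnet honeypot is simple to sketch. The hypothetical Python example below listens for a connection, presents login and password prompts, and records whatever credentials are offered, which is how default-credential scans like Silexbot’s get captured in the first place.

```python
import socket
import threading

def honeypot(port: int, log: list) -> socket.socket:
    """Minimal sketch of a credential-logging Telnet honeypot.

    A real honeypot (such as a DVR emulator) would mimic a specific
    device's banner and shell; this one just prompts and records.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)

    def handle():
        conn, addr = srv.accept()
        conn.sendall(b"login: ")
        user = conn.recv(64).strip()
        conn.sendall(b"password: ")
        pwd = conn.recv(64).strip()
        # Record the attempted credentials for later analysis
        log.append((addr[0], user.decode(), pwd.decode()))
        conn.close()

    threading.Thread(target=handle, daemon=True).start()
    return srv
```

Any bot that walks a default-credential list against this listener leaves its username and password pairs behind in the log, along with the source address.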

“I’m not sure they realized some of the collateral damage that could be there. Something motivated this kid to destroy these devices.”

The Silexbot author has been identified as a 14-year-old boy and the message in the code certainly reads like a ninth-grade English teacher’s nightmare.

“I am only here to prevent skids to flex their skidded botnet I am sorry for your device but it has to be done because all the skids claiming and thinking they are some god coder + people selling spots on botnets I am getting sick of it so yeah sorry,” the message in the Silexbot code says.

IoT botnets have been a real threat for several years, most notably in the form of Mirai, which became an enormous network of infected IP cameras, DVRs, and other devices. Mirai was used in several large DDoS attacks, including one that targeted Dyn, a DNS provider in New Hampshire. That attack in October 2016 had a cascading effect that resulted in some of the most popular sites on the Internet being knocked offline, including Amazon, Twitter, the New York Times, and Spotify. Mirai was actually several smaller botnets controlled by various groups at various times, and some of the controllers would rent out access to their botnets. That’s a common occurrence in the DDoS world, with botnet controllers looking for any way to make money from their networks of compromised devices.

The Silexbot author doesn’t appear to support this particular business model. Cashdollar said the author contacted him on Twitter and expressed some remorse for his actions.

“They tracked me down on Twitter and said they didn’t realize it was going to get this much attention and that they were worried they were going to get in trouble,” Cashdollar said. “I’m not sure they realized some of the collateral damage that could be there. Something motivated this kid to destroy these devices.”

The IP address from which the Silexbot malware was delivered to Cashdollar's honeypot was on a virtual private server in Iran, but that's not necessarily a clear indication of where the creator is.

]]>
<![CDATA[Decipher Podcast: Michael Coates]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/decipher-podcast-michael-coates https://duo.com/decipher/decipher-podcast-michael-coates Wed, 26 Jun 2019 00:00:00 -0400

Michael Coates, CEO and co-founder of cloud security startup Altitude Networks, has had a long and varied career. Beginning with a stint as a consultant breaking into banks and continuing through roles as head of security assurance at Mozilla and CISO at Twitter, he has helped protect hundreds of millions of users over the years. He spoke with Dennis Fisher about what he's learned about empowering teams, the importance of making users safe by default, and the value of solving problems one at a time.

]]>
<![CDATA[Amazon Unveils Security Hub, Control Tower to Aid AWS Security]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/amazon-unveils-security-hub-control-tower-to-aid-aws-security https://duo.com/decipher/amazon-unveils-security-hub-control-tower-to-aid-aws-security Tue, 25 Jun 2019 00:00:00 -0400

Amazon is rolling out two new tools to help AWS customers create and securely configure new cloud deployments and to ensure they stay as secure as possible once they’re up and running.

The new tools, called Control Tower and Security Hub, both were unveiled at Amazon’s AWS re:Inforce conference in Boston Monday and are part of an effort to streamline the process of configuring and locking down AWS environments and accounts. Figuring out the initial levels of security and access can be complicated, especially for companies or teams that are new to AWS environments, so Amazon has developed the new services to ease that burden a bit.

Control Tower is designed as a comprehensive tool for securely setting up new AWS environments, providing a method for automating many of the tasks involved in initial setup, such as identity and access management, centralized logging, and security audits across accounts. Control Tower comprises a number of individual components, including the Landing Zone, which is the multi-account AWS environment the tool sets up; a set of default policy controls known as Guardrails; Blueprints, which are the design patterns used to establish the Landing Zone; and the Environment, which is the AWS account and all of the attendant resources set up to run an application.

"Control Tower is basically template for an entire enterprise deployment and management of a full, multi-account environment with all key security controls pre-configured. For new clients, especially small to mid-sized ones, it looks promising," said Rich Mogull, CEO of Securosis.

The Control Tower service can only be used for setting up fresh AWS accounts and there’s no extra charge for it.

“This service automates the process of setting up a new baseline multi-account AWS environment that is secure, well-architected, and ready to use. Control Tower incorporates the knowledge that AWS Professional Service has gained over the course of thousands of successful customer engagements,” said Jeff Barr, chief evangelist for AWS.

“AWS Control Tower builds on multiple AWS services including AWS Organizations, AWS Identity and Access Management (IAM) (including Service Control Policies), AWS Config, AWS CloudTrail, and AWS Service Catalog. You get a unified experience built around a collection of workflows, dashboards, and setup steps. AWS Control Tower automates a landing zone to set up a baseline environment.”

The second piece of Amazon’s security news this week is the release of Security Hub for general availability. The tool has been in preview mode until now, and is meant to function as a central dashboard for teams to monitor security alerts and issues in their AWS environments. Most enterprises have something similar on their internal networks, but cloud deployments are a different story. The variety of accounts and complexity of deployments can make managing and prioritizing security alerts a difficult task, and Security Hub is meant to take some of the burden of that off of security teams.

“When you enable AWS Security Hub, permissions are automatically created via IAM service-linked roles. Automated, continuous compliance checks begin right away. Compliance standards determine these compliance checks and rules. The first compliance standard available is the Center for Internet Security (CIS) AWS Foundations Benchmark. We’ll add more standards this year,” said Brandon West, leader of the AWS developer evangelism team.

“The results of these compliance checks are called findings. Each finding tells you severity of the issue, which system reported it, which resources it affects, and a lot of other useful metadata. For example, you might see a finding that lets you know that multi-factor authentication should be enabled for a root account, or that there are credentials that haven’t been used for 90 days that should be revoked.”

Security Hub has automation at its heart, but it also allows customers to shape it to their needs in many ways. For example, customers can create custom actions that group various findings together to create an event that can then trigger something like an alert sent to specific people.
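As a rough illustration of that grouping idea (using simplified stand-ins, not the real AWS Security Finding Format or the Security Hub API), a custom action might bundle matching findings into a single event payload like this:

```python
from collections import defaultdict

# Simplified stand-ins for Security Hub findings; the real service uses
# the AWS Security Finding Format, which carries far more metadata.
findings = [
    {"Id": "f-1", "Severity": "HIGH", "Resource": "root-account",
     "Title": "Root account lacks MFA"},
    {"Id": "f-2", "Severity": "LOW", "Resource": "user/deploy",
     "Title": "Credentials unused for 90 days"},
    {"Id": "f-3", "Severity": "HIGH", "Resource": "sg-open",
     "Title": "Security group allows 0.0.0.0/0"},
]

def group_for_action(findings, severities=frozenset({"HIGH", "CRITICAL"})):
    """Bundle matching findings by resource into one event payload, the way
    a custom action might group them before triggering an alert."""
    event = defaultdict(list)
    for f in findings:
        if f["Severity"] in severities:
            event[f["Resource"]].append(f["Id"])
    return dict(event)

print(group_for_action(findings))
# {'root-account': ['f-1'], 'sg-open': ['f-3']}
```

The resulting payload is the kind of event that could then be routed to an alerting destination for specific people.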

"Security Hub is a decent start but has a long way to go. It combines AWS and third-party security dashboarding in one place. It will be an essential tool for all security organizations in AWS, even when using third-party tools that offer overlapping functionality," Mogull said.

]]>
<![CDATA[Thieves Switching to Shimmers to Steal from ATMs]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/thieves-switching-to-shimmers-to-steal-from-atms https://duo.com/decipher/thieves-switching-to-shimmers-to-steal-from-atms Tue, 25 Jun 2019 00:00:00 -0400

As chip-based payment cards become the norm, criminals are shifting tactics to use shimmers rather than skimmers to steal money from automated teller machines.

“Shimmers have been slowly nudging skimmers aside as the number of EMV implementations increases nationwide,” Flashpoint’s Isaac Palmer wrote.

Skimmers are small devices that fit over the machine’s card reader and copy data from the card’s magnetic stripe. Criminals interested in stealing from ATMs install these devices over the real card reader and wait for people to swipe their cards. When someone swipes a card through a tampered reader, both the real card reader and the skimmer see the information on the magnetic stripe. Criminals then use the stolen information to create cloned cards and use them in other locations.

However, the growing use of chip-based cards and the Europay Mastercard Visa (EMV) payment standard means skimmers are no longer as effective. Chip cards theoretically cannot be cloned because of a component in the chip, the integrated circuit card verification value (iCVV), which protects against the copying of data from the chip. Instead of reading the magnetic stripe, criminals are using shimmers, thin devices typically positioned between the chip and the chip reader, to capture data from the chip.
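The channel check that makes shimmed data hard to reuse can be sketched in a few lines. The Python below is purely illustrative: real CVV and iCVV values are derived cryptographically with issuer keys, but the point is that the issuer expects a different verification value from the stripe than from the chip, so a stripe clone built from shimmed chip data fails if the bank actually checks.

```python
# Illustrative only: real iCVV/CVV values are derived with DES-based
# algorithms and issuer keys; here they are just distinct stored values.
ISSUER_RECORDS = {
    "4000001234567899": {"stripe_cvv": "123", "chip_icvv": "901"},
}

def authorize(pan: str, channel: str, verification_value: str) -> bool:
    """Reject data captured from one channel and replayed on the other.

    A shimmer captures the chip's iCVV; a stripe clone built from that
    data presents the iCVV where the issuer expects the stripe CVV, so
    an issuer that verifies the value declines the transaction.
    """
    record = ISSUER_RECORDS.get(pan)
    if record is None:
        return False
    expected = (record["stripe_cvv"] if channel == "stripe"
                else record["chip_icvv"])
    return verification_value == expected

# Genuine swipe passes; a stripe clone carrying shimmed chip data fails.
assert authorize("4000001234567899", "stripe", "123") is True
assert authorize("4000001234567899", "stripe", "901") is False
```

This is why the attack only pays off against banks that skip or misconfigure iCVV verification.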

The Secret Service estimates that $1 billion is stolen every year by criminals using skimming devices. Skimming is a lucrative revenue stream, so it makes sense that criminals are adapting to new technology.

There was “growing interest” in shimmers in criminal forums and marketplaces, as evidenced by advertisements for custom-built shimmers and videos describing where to place shimmers, Palmer said.

One way to deal with criminals intercepting card details through the reader is to install a Card Protection Plate (CPP) inside the ATM to prevent objects from being inserted into the reader. Bypassing the CPP is difficult, and “it’s highly unlikely an attacker would be able to open the device and remove the CPP,” Palmer said.

A shimmer could be thin enough to bypass CPP, and if the bank is not properly verifying transactions, such as authenticating iCVV, then criminals would be able to steal card data. However, Flashpoint said CPPs are still the best defense against ATM shimming attacks, especially if installed with an optional tamper switch. The switch will mitigate any attacks that might move or put added pressure on the CPP and trigger an alarm.

]]>