<![CDATA[Decipher]]> https://decipher.sc Decipher is an independent editorial site that takes a practical approach to covering information security. Through news analysis and in-depth features, Decipher explores the impact of the latest risks and provides informative and educational material for readers curious about how security affects our world. Fri, 24 May 2019 00:00:00 -0400 en-us info@decipher.sc (Amy Vazquez) Copyright 2019 3600 <![CDATA[New Bills Would Require Warrants for Border Device Searches]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/new-bills-would-require-warrants-for-border-device-searches https://duo.com/decipher/new-bills-would-require-warrants-for-border-device-searches Fri, 24 May 2019 00:00:00 -0400

A bipartisan group of lawmakers in both houses of Congress is pushing legislation that would require law enforcement to get a warrant before searching Americans’ devices as they cross the border back into the United States.

The new bills, which were introduced in both the House of Representatives and the Senate this week, are a response to an increase in warrantless searches of U.S. citizens’ devices at the border in recent years. Currently, law enforcement agents rely on a Fourth Amendment exception for border searches in order to search devices without a warrant. Rep. Ted Lieu (D-Calif.) introduced the House bill and Sen. Ron Wyden (D-Ore.) submitted the Senate version, saying that law enforcement agents should have to show probable cause and obtain a warrant in order to search devices owned by Americans returning to the country after international travel.

“The border is quickly becoming a rights-free zone for Americans who travel. The government shouldn’t be able to review your whole digital life simply because you went on vacation, or had to travel for work,” Wyden said. “It’s not rocket science: Require a warrant to search Americans’ electronic devices, so border agents can focus on the real security threats, not regular Americans.”

The introduction of the legislation comes at a time when Congress is paying increasing attention to the security and privacy issues associated with technology. Earlier this week, Sen. Josh Hawley (R-Mo.) introduced the Do Not Track Act, a bill that would establish a single mechanism through which people could prevent websites from tracking them as they move around the web. That measure is designed to protect people from surveillance that is largely invisible to them. The border-search bills, meanwhile, address surveillance that is quite obvious and more invasive.

Those bills seek to remove the Fourth Amendment exception for searches of devices at the border.

“Accessing the digital contents of electronic equipment, accessing the digital contents of an online account, or obtaining information regarding the nature of the online presence of a United States person entering or exiting the United States, without a lawful warrant based on probable cause, is unreasonable under the Fourth Amendment to the Constitution of the United States,” the Senate bill says.

Under the proposed legislation, law enforcement agents would need a warrant to search devices in the possession of Americans coming into the country, and would not be allowed to deny entry to people who refuse to allow searches or disclose account credentials. The bills provide broad exceptions for situations that present an “immediate danger of death or serious physical injury to any person,” as well as for national security threats and organized crime activity. The legislation builds on a 2014 U.S. Supreme Court decision holding that law enforcement agents need a warrant to search the devices of people who have been arrested.

Lieu, who along with Wyden has been focused on digital privacy and security issues for many years, said the legislation is necessary given the amount of sensitive information people store on their devices today.

“We must protect Americans’ privacy—whether it’s on a city sidewalk, at a border checkpoint or anywhere else in the U.S. At the border, American travelers should not be subjected to invasive searches of their electronic devices without a warrant. The Fourth Amendment guarantees this right,” Lieu said.

]]>
<![CDATA[Do Not Track Act Would Give Users More Power]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/do-not-track-act-would-give-users-more-power https://duo.com/decipher/do-not-track-act-would-give-users-more-power Thu, 23 May 2019 00:00:00 -0400

There’s yet another effort underway in Washington to establish an enforceable Do Not Track system that would provide a one-click mechanism for people to opt out of persistent web tracking by advertisers and social media platforms.

The latest push comes in the form of the Do Not Track Act, a bill unveiled this week by Sen. Josh Hawley (R-Mo.) that emulates the structure of the Do Not Call registry. It would establish a method for consumers to send a signal to online companies that would block them from collecting any information past what is necessary to deliver their services. The bill also would stop companies from building profiles of the people who activate the DNT mechanism or discriminating against them if they use the option.

Hawley’s bill makes the Federal Trade Commission the enforcement authority for the system, and anyone who violates the measure would be liable for penalties of $50 per affected user for each day the violation is ongoing.
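The penalty formula is simple multiplication. As an illustrative sketch (the function name and example figures are hypothetical; only the $50-per-user-per-day rate comes from the bill):

```python
def dnt_penalty(users_affected: int, days_ongoing: int) -> int:
    """Penalty under the bill's formula: $50 per affected user,
    for every day the violation is ongoing."""
    return 50 * users_affected * days_ongoing

# A violation affecting 10,000 users that persists for 30 days:
print(dnt_penalty(10_000, 30))  # 15000000 -- i.e., $15 million
```

Because the figure scales with both user count and duration, penalties for large platforms would compound quickly.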

“Big tech companies collect incredible amounts of deeply personal, private data from people without giving them the option to meaningfully consent. They have gotten incredibly rich by employing creepy surveillance tactics on their users, but too often the extent of this data extraction is only known after a tech company irresponsibly handles the data and leaks it all over the internet,” Hawley said.

“The American people didn't sign up for this, so I'm introducing this legislation to finally give them control over their personal information online.”

In practice, Hawley’s proposed Do Not Track system would involve an app or extension that people could download, which then “sends the DNT signal to every website, online service, or online application to which the device connects each time the device connects to such website, service, or application; and permits the user of the connected device to designate websites, services, or applications to which such signal should not be sent, but does not exempt any website, service, or application from receiving such signal if it is not so designated.”
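The DNT signal itself is just an HTTP header. A minimal sketch of the behavior the bill describes, using only the standard library (the exemption set and function name are illustrative, not from the bill text):

```python
import urllib.parse
import urllib.request

def build_dnt_request(url, exempt=None):
    """Attach the DNT header to every outgoing request unless the user
    has designated the site as exempt, mirroring the per-site allow
    list the bill describes."""
    req = urllib.request.Request(url)
    host = urllib.parse.urlparse(url).netloc
    if not exempt or host not in exempt:
        req.add_header("DNT", "1")  # 1 = "do not track me"
    return req

req = build_dnt_request("https://example.com/")
print(req.get_header("Dnt"))  # urllib stores header names capitalized
```

The difference under the bill is not the signal but the consequences: sending `DNT: 1` today is a request sites may ignore, whereas the Act would make ignoring it a violation.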

"I think we should make it compulsory and give it the force of law and give consumers real choice and force the companies to comply. This puts the ball in the consumer’s court."

The Do Not Track Act is an attempt to rectify what has become an epidemic of online tracking and profile-building. Advertisers, website operators, and social media platforms all are heavily invested in monitoring users’ movements around the web, tracking where and when they interact with other sites and content. That tracking allows sites to build profiles of visitors and their interests and further target ads and other content. Those tracking methods and techniques are completely opaque for most people, and the existing mechanisms for opting out or preventing tracking range from mostly useless to pretty effective, but can also affect people’s browsing experience in a major way.

The Do Not Track option that’s built into most browsers today falls on the mostly useless end of the spectrum. Enabling the option sends a signal to sites that the visitor does not want to be tracked, but there is no enforcement for it and site owners have no obligation to respect it. Ad blockers and other similar browser extensions can be quite effective, but they don’t prevent all tracking and can also break certain elements on some sites and make others nearly unusable.

Hawley’s bill seeks to remedy this situation by establishing the FTC as the enforcement authority and providing monetary penalties for violations. In a hearing of the Senate Judiciary Committee on Monday, Hawley said the bill was necessary to give consumers control over what data they share and whether they’re tracked.

“Google and Facebook are doing something different in this market. They’re not using traditional advertising models. They track us every single day. [The bill] just says that a consumer can make a one time choice to not be tracked. I think we should make it compulsory and give it the force of law and give consumers real choice and force the companies to comply. This puts the ball in the consumer’s court,” Hawley said.

Hawley’s bill is similar to draft legislation written earlier this month by staffers at DuckDuckGo, the privacy-focused search engine provider, although the penalties are structured differently.

]]>
<![CDATA[Moody’s Revises Equifax Outlook Post-Breach]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/moody-s-downgrades-equifax-post-breach https://duo.com/decipher/moody-s-downgrades-equifax-post-breach Thu, 23 May 2019 00:00:00 -0400

Data breaches can be costly in terms of recovery, lost productivity, and regulatory fines. Moody’s revision of its outlook on Equifax shows that a breach can also be detrimental to a company’s financial future.

Credit ratings agency Moody’s revised its outlook on Equifax, from “stable” to “negative,” CNBC reported, citing a recent note from the credit ratings and research service. Investors rely on ratings from services like Moody’s and Standard & Poor’s to determine the trustworthiness of companies and other organizations, as well as the riskiness of such an investment. A lower credit rating means investors will consider that organization as being riskier, and more likely to result in an investment loss.

“We are treating this with more significance because it is the first time that cyber has been a named factor in an outlook change,” Moody’s spokesperson Joe Mielenhausen told CNBC. “This is the first time the fallout from a breach has moved the needle enough to contribute to the change.”

Moody’s rating outlook is an opinion about the direction the organization's rating is headed in the medium term. A negative outlook means there are negative pressures on the organization and that there is a possibility that there will be a downgrade in the credit rating.

The decision to revise the outlook wasn’t just because Equifax had reported a $690 million charge in the first quarter of 2019 for the 2017 data breach, which exposed the information of 147.9 million customers. That figure included settling class action lawsuits and potential state and federal regulatory fines. Moody’s noted that the company still needed to make infrastructure improvements to address systemic security weaknesses. While attackers exploited an unpatched vulnerability in Apache Struts on a forgotten server, post-breach analysis found that Equifax had other infrastructure weaknesses and organizational problems that contributed to the breach. Moody’s estimated that Equifax will have security expenses and capital investments of about $400 million in 2019 and 2020, and about $250 million in 2021. Equifax is expected to spend more in infrastructure investments after 2020 than it did prior to 2017.

Moody’s noted that if Equifax will be spending hundreds of millions of dollars on security investments for the next few years, that’s money that is not being invested in new revenue-focused products. Rivals Experian and TransUnion will be able to experiment during this time period and take market share from Equifax.

Moody’s decision to revise the outlook is the first example of a company being held accountable for its security.

While Equifax may be the first company to face scrutiny by ratings services because of its security missteps, it will likely not be the last. Moody’s is in the process of making cyber risk a part of its credit ratings going forward, and plans to hold companies accountable for their security decisions. Calculating risk will let rating services try to predict the long-term fallout of a data breach. Moody’s isn’t the only ratings service taking this step, either. Other ratings services and insurance companies are also figuring out how to calculate an organization’s security risk.

It would be “super interesting” to see what models Moody’s winds up adopting, Rich Mogull, vice-president of product at DisruptOps, said on Twitter. “Done properly they could have a real impact on practices like data collection and retention that are more impactful than mandated security controls.”

"Actions such as the one Moody has taken are designed to deliver a message, and we know that when boards are engaged in cybersecurity risk issues risk management practices improve, sometimes dramatically," said Gary Roboff, a senior advisor to Shared Assessments.

Data-focused companies such as financial and securities firms, hospitals, market infrastructure providers, and electric utilities are among the firms most at risk for being downgraded under the new scheme, CNBC reported.

Boards and CISOs are carefully watching what happens with Equifax since this is the first time that a data breach will affect a company’s ability to attract investors. Up until now, there hasn’t been a lot of impact on companies post-breach. Shoppers return to the retailers, stock prices bounce back (somewhat), and companies move on after they pay their fines. Moody’s decision to revise the outlook is the first example of a company being held accountable for its security, and is a clear indicator that boards and senior executives need to consider security risk as part of their operational assessments.

"This is a wake up call, along with pending suits, that cyber governance and best practices are key," said Catherine A. Allen, chairman and CEO of The Santa-Fe Group. "Boards should have robust discussion on cyber practices, appropriate spending, risk or security committees and appropriate oversight."

This story and headline was updated to reflect that Moody's revised its outlook for Equifax and not the credit rating. An explanation of the outlook was also added.

]]>
<![CDATA[Attackers Are Signing Malware With Valid Certificates]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/attackers-are-signing-malware-with-valid-certificates https://duo.com/decipher/attackers-are-signing-malware-with-valid-certificates Wed, 22 May 2019 00:00:00 -0400

There used to be a time when a signed Windows executable file meant it was a file from a legitimate organization and safe to use, and malware was typically unsigned. That assumption is no longer a good one as attackers are increasingly signing their attack tools with valid certificates purchased from well-known certificate authorities.

Thousands of malware samples uploaded to VirusTotal have been signed with valid certificates from well-known certificate authorities, said researchers from Chronicle. The researchers identified 3,815 signed malware samples (hashes here) that had been uploaded to the scanning service over the one-year period leading up to May 7. The certificates came from CAs such as DigiCert, Entrust, GlobalSign, Go Daddy, Sectigo, Symantec, Thawte, and VeriSign.

“Since its inception, the process of cryptographically signing a piece of code was designed to give the Operating System a way to discriminate between legitimate and potentially malicious software,” Chronicle wrote.

More samples are likely to exist, as this is a conservative count. The researchers focused only on Windows portable executable (PE) files, excluded samples that had fewer than 15 detections, and filtered out any files considered only borderline malicious.
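Because Chronicle published the sample hashes, defenders can check binaries against that list with nothing more than the standard library. A minimal sketch (the file path and hash set here are placeholders, not Chronicle's actual data):

```python
import hashlib

def sha256_of(path):
    """Hash a file in chunks so large binaries aren't read into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_bad(path, ioc_hashes):
    """True if the file's digest appears in a published IOC hash list."""
    return sha256_of(path) in ioc_hashes
```

This kind of hash comparison catches only exact copies of the published samples; it complements, rather than replaces, checking who signed a binary and whether the certificate has been revoked.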

This system of trust doesn’t work when malware authors can easily purchase certificates from certificate authorities and their network of resellers to give their code a sign of legitimacy.

"Code signing certificates establish trust for code, apps, containers, updates for everything from mobile phones to laptops to servers to airplanes," said Kevin Bocek, vice president of security strategy and threat intelligence at Venafi. "Code signing certificates are the type of machine identity that made Stuxnet so virulent and successful: they allow attackers to operate with impunity even in the most hardened security environments."

Chronicle found that Sectigo, formerly known as Comodo, had issued the highest number of digital certificates used to sign malware over the observed period, with nearly 2,000 certificates. Thawte and VeriSign were the second and third most popular CAs among the signed malware. The top six CAs, each of which had signed 100 or more malware samples, accounted for about 78 percent of the signed malware that had been uploaded to VirusTotal, Chronicle said.

Given that Sectigo is the largest commercial certificate authority, it stands to reason that attackers gravitate toward it: a larger share of the market means a larger share of the abuse.

Attackers aren’t even bothering to pretend to be an existing company, said Brandon Levene, the head of applied intelligence at Chronicle. Instead of spoofing a well-known company, the attackers are just signing the malware as themselves. “This study has indicated that the majority of samples are just buying Code Signing certs under the guise of quickly created LLCs (or other organizations),” Levene said.

There used to be a time when malware signed with a legitimate certificate was the mark of a sophisticated, nation-state-backed attacker. That is no longer the case, as attackers of all types can obtain valid certificates from certificate authorities just by claiming to be a business.

“Signed payloads are no longer solely within the domain of nation-state threat actors stealing code signing certificates from victims; they are readily accessible to operators of crime focused malware,” the researchers wrote. “Expect to see signed malware reported more frequently.”

There are plenty of opportunities for attackers to misuse code signing certificates, and even the most advanced security tools can be fooled by code signed with valid certificates, Bocek said.

“If there is a single cyberweapon that can inflict maximum damage it is code signing certificates," Bocek said.

CAs are revoking the certificates when they find out they are being abused, but the rate is still pretty low. About 21 percent of samples with abused certificates have been revoked, as of May 8, Chronicle said. It is possible that the number of revoked certificates is actually higher since the data set would reflect the change only if VirusTotal had rescanned the sample after the certificate was revoked. CAs may have their own procedures for monitoring how the certificates are being used, but it is more likely that they are revoking after they receive reports from outside parties about the fraudulent use.

"I have seen times where social media was used to call out CAs on specific malware samples and campaigns but am unsure what monitoring (if any) is in place for CAs to vet software their certs appear in,” Levene said.

Interestingly, Thawte had a higher rate of revoked certificates, as it had revoked 306 certificates, or about 60 percent of the fraudulently used certificates it had issued. Sectigo had revoked 293, or about 17 percent of the fraudulently used certificates.

The ease of getting the code-signing certificates from the CAs directly or through the network of resellers suggests there should be a more stringent process for validating the buyers. Some due diligence before selling the certificate would be better than revoking the certificate afterwards, so that malware signed with that certificate can't cause as much damage.

“While malware abusing trust is not a new phenomenon, the popular trend of financially motivated threat actors buying code signing certificates illuminates the inherent flaws of trust based security,” the researchers wrote.

]]>
<![CDATA[Google Stored Some G Suite Passwords in Plain Text]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/google-stored-some-g-suite-passwords-in-plain-text https://duo.com/decipher/google-stored-some-g-suite-passwords-in-plain-text Wed, 22 May 2019 00:00:00 -0400

For the past 14 years, Google has stored the passwords of customers of its G Suite enterprise products in plaintext on its internal network. The company discovered the problem recently, has notified affected customers, and said that there is no evidence of improper access to the passwords.

G Suite is Google’s enterprise productivity offering, and includes Gmail, Drive, and many other apps. The offering is managed by an internal administrator at each customer.

The situation is a result of a mistake the company’s engineers made in 2005 when they were setting up a function for password recovery for enterprise administrators. The system was designed to allow administrators to set up and recover users’ passwords, which in normal circumstances are hashed before they’re stored. But during the implementation of the feature, Google’s engineers mistakenly allowed G Suite users’ passwords to be written to disk in unhashed form. That means that anyone inside Google with access to those servers would have been able to read those passwords.
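Hashing before storage is exactly the step the recovery tool skipped. A minimal sketch of the standard pattern using the standard library's scrypt (the parameters here are illustrative, not Google's actual scheme):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Store only a random salt and a slow, salted hash --
    never the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=2**26)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, maxmem=2**26)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```

With this scheme, even an insider who reads the stored records sees only salts and digests; recovering the original password requires a brute-force attack that the slow hash is designed to make expensive. That one-way property is also why a "recover the user's password" admin feature is fundamentally at odds with proper storage.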

However, the servers that stored the passwords were not exposed to the Internet, the company said.

“In our enterprise product, G Suite, we had previously provided domain administrators with tools to set and recover passwords because that was a common feature request. The tool (located in the admin console) allowed administrators to upload or manually set user passwords for their company’s users. The intent was to help them with onboarding new users; e.g., a new employee could receive their account information on their first day of work, and for account recovery. The functionality to recover passwords this way no longer exists,” Suzanne Frey, vice president of engineering for cloud trust at Google, said.

“We made an error when implementing this functionality back in 2005: The admin console stored a copy of the unhashed password. This practice did not live up to our standards. To be clear, these passwords remained in our secure encrypted infrastructure. This issue has been fixed and we have seen no evidence of improper access to or misuse of the affected passwords.”

The incident, while embarrassing for Google, likely does not represent much of a current threat to G Suite customers. The plaintext passwords stayed inside Google’s network and weren’t viewable externally, so the main concern would be access by a trusted insider. That’s not a minor concern by any means, but Google has notified all of the affected G Suite customers and required password resets for them. For any customers who haven’t made that change themselves, Google will reset the passwords. Also, Google’s systems don’t rely only on a password for authentication in many cases, especially for G Suite accounts.

“Our authentication systems operate with many layers of defense beyond the password, and we deploy numerous automatic systems that block malicious sign-in attempts even when the attacker knows the password. In addition, we provide G Suite administrators with numerous 2-step verification (2SV) options, including Security Keys, which Google relies upon for its own employee accounts,” Frey said.

]]>
<![CDATA[Firefox Now Blocks Cryptominers and Fingerprinters]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/firefox-now-blocks-cryptominers-and-fingerprinters https://duo.com/decipher/firefox-now-blocks-cryptominers-and-fingerprinters Tue, 21 May 2019 00:00:00 -0400

Mozilla has made several subtle but significant changes to Firefox in the newest release that give people more control over the way the browser handles data in private browsing mode and what content they can block automatically.

The company released Firefox 67 on Tuesday, and among the many changes and improvements are a few tweaks to the privacy and security settings in the browser that are meant to make it easier for users to handle some content. The most significant change is in the way Firefox deals with extensions across windows in private browsing mode. Previously, the browser would automatically enable a given extension across all windows, regardless of whether any of them were running in private mode. But now, that behavior is reversed.

“There are significant changes in Firefox’s behavior and user interface so that users can better see and control which extensions run in private windows. Starting with release 67, any extension that is installed will be, by default, disallowed from running in private windows. The post-install door hanger, shown after an extension has been installed, now includes a checkbox asking the user if the extension should be allowed to run in private windows,” Mike Conca, a product manager for Firefox WebExtensions, said.

“To avoid potentially breaking existing user workflows, extensions that are already installed when a user upgrades from a previous version of Firefox to version 67 will automatically be granted permission to run in private windows. Only newly installed extensions will be excluded from private windows by default and subject to the installation flow described above.”

Private mode allows people to browse without having Firefox retain any history or tracking information about any of the sites visited in that window. Because those windows are meant to be private, having extensions automatically run in them was sort of contradictory to the purpose of private mode. In addition to the change in extension behavior, Firefox 67 also now allows users to save passwords in private mode, a convenience that wasn’t available previously.

Firefox 67 also includes a new feature that allows people to set a preference in the Content Blocking setting to block known cryptominers and fingerprinters. Cryptominers are small programs that run in browsers and use the machine’s resources to mine a cryptocurrency. Some news websites now use cryptominers as a form of micropayment for visitors, a reaction to the advent of ad blockers, and there also are malicious cryptominers that attackers install without users’ consent, usually through drive-by downloads.

Browser fingerprinters, meanwhile, are a subset of trackers that allow sites to gather a certain amount of information about a visitor’s browser and device, even without the use of a cookie or other persistent presence on the machine. There’s a broad range of fingerprinters, some more invasive than others.
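Conceptually, a fingerprint is just a stable digest of attributes the browser exposes anyway. A toy sketch of the idea (the attribute names are examples; real fingerprinters collect far more, including canvas rendering and installed plugins):

```python
import hashlib
import json

def fingerprint(attrs):
    """Derive a stable identifier from browser attributes alone --
    no cookie or other stored state is needed."""
    canonical = json.dumps(attrs, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {"user_agent": "Firefox/67.0", "screen": "1920x1080",
          "timezone": "America/New_York", "fonts": ["Arial", "Helvetica"]}
print(fingerprint(device))  # same attributes -> same identifier, every visit
```

Because the identifier is recomputed from the device itself on every visit, clearing cookies does nothing to escape it, which is why Firefox's approach is to block the fingerprinting scripts outright.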

“One of the three key areas we said we’d tackle was mitigating harmful practices like fingerprinting which builds a digital fingerprint that tracks you across the web, and cryptomining which uses the power of your computer’s CPU to generate cryptocurrency for someone else’s benefit. Based on recent testing of this feature in our pre-release channels last month, today’s Firefox release gives you the option to “flip a switch” in the browser and protect yourself from these nefarious practices,” Mozilla’s Marissa Wood said.

]]>
<![CDATA[Security Basics Prove Highly Effective at Stopping Account Takeovers]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/security-basics-prove-highly-effective-at-stopping-account-takeovers https://duo.com/decipher/security-basics-prove-highly-effective-at-stopping-account-takeovers Mon, 20 May 2019 00:00:00 -0400

Phishing attacks continue to grow in sophistication, as do defenses, but even the weakest account security measures--such as registering a recovery phone number--can prevent the vast majority of such attacks, according to a year-long study by Google and academic researchers.

Google, like many other major account providers, employs a broad and deep set of protections on user accounts, all of which are designed to prevent account-takeover attacks. Those attacks most often take the form of phishing, whether it’s through email, SMS, or another channel, and over time Google has added layer after layer of defense against those attacks. Some of those protections are relatively weak, such as the account holder remembering her alternate email address or answering a challenge question. Others are much stronger, including systems such as two-step verification or the use of a hardware security key, which present much more difficult barriers for attackers.

In cooperation with New York University and the University of California, San Diego, Google conducted a long study of more than 1.2 million accounts and found that just taking the simple step of adding a recovery phone number to an account can prevent 99 percent of mass phishing attacks and 100 percent of bot-based attacks. A phone number is used to determine whether the account holder has access to a trusted device, and can be used for further challenges that block automated attacks.

“If you’ve signed into your phone or set up a recovery phone number, we can provide a similar level of protection to 2-Step Verification via device-based challenges. We found that an SMS code sent to a recovery phone number helped block 100% of automated bots, 96% of bulk phishing attacks, and 76% of targeted attacks,” Kurt Thomas and Angelika Mosicki of Google said.
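A device-based challenge of this kind boils down to proving possession of a one-time code. A minimal sketch of issuing and checking one (delivery over SMS is out of scope here, and this is not Google's implementation):

```python
import hmac
import secrets

def issue_code(n_digits=6):
    """Generate an unpredictable one-time code to send to the
    recovery phone number."""
    return "".join(secrets.choice("0123456789") for _ in range(n_digits))

def check_code(submitted, issued):
    # Compare in constant time so digits don't leak through timing.
    return hmac.compare_digest(submitted, issued)
```

Using the `secrets` module rather than `random` matters here: the code must be unpredictable to an attacker, which is what lets it block automated bots that can guess passwords but cannot intercept the phone.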

One of the issues with these kinds of systems, though, is that they can result in legitimate account holders failing to authenticate more often. People forget things, even their own email addresses and passwords, and sometimes may not have their phones close at hand, as hard as that may be to believe. In the event that a user forgets her credentials, those problems can lead to the user being locked out of the account. The Google and university researchers acknowledged this limitation.

“From a practical standpoint, we found that challenges, in conjunction with risk-aware authentication, blocked over 99.99% of automated hijacking attempts and 92% of attacks rooted in phishing at Google. These protections come at a cost of increased failed sign-in attempts from legitimate users, but with eventual success rates at levels similar to password-only authentication,” the researchers said in their paper on account takeovers.

“But unlike many cybercrime threats, users can take simple proactive steps to dramatically increase their security. Users who associate a device with their account can reduce their phishing risk by up to 99%. This approach provides similar levels of protection to two-factor authentication while removing the requirement of always having a device on-hand.”

"Since targeted attackers focus on specific email accounts, they can curate their attacks accordingly to be uniquely effective against those individuals."

The study looked at two separate types of attacks: mass phishing attacks and targeted attacks. Most people are at much higher risk for automated or mass phishing attacks and likely will never see a targeted attack. For the most part, targeted attacks that seek to take over an individual’s account go after a small subset of people, such as executives, diplomats, activists, journalists, and politicians. For those people, higher levels of protection are necessary for preventing account takeovers, and those defenses typically include the use of hardware security keys in combination with the other layered defenses. Defeating targeted attacks is a more difficult task, mainly because attackers who have a small target group can take their time and do reconnaissance on those targets and develop specific tactics for each one.

“Whereas attackers operating at scale expect to extract small amounts of value from each of a large number of accounts, targeted attackers expect to extract large amounts of value from a small number of accounts. This shift in economics in turn drives an entirely different set of operational dynamics. Since targeted attackers focus on specific email accounts, they can curate their attacks accordingly to be uniquely effective against those individuals,” the researchers said.

“Moreover, since such attackers are unconcerned with scale, they can afford to be far nimbler in adapting to and evading the defenses used by a particular target. Indeed, targeted email attacks—including via spear-phishing and malware—have been implicated in a wide variety of high-profile data breaches against government, industry, NGOs and universities alike.”

In their study on targeted attacks, the researchers looked at underground groups that offer hack-for-hire services to break into specific accounts. They interacted with 27 different services and asked them to target honeypot Gmail accounts that Google set up for the study. Each of the victim accounts had an individual website with some personal information, as well. The attackers used a variety of different techniques, but most of them centered on some form of social engineering.

“We confirm that such hack for hire services predominantly rely on social engineering via targeted phishing email messages, though one service attempted to deploy a remote access trojan. The attackers customized their phishing lures to incorporate details of our fabricated business entities and associates, which they acquired either by scraping our victim persona’s website or by requesting the details during negotiations with our buyer persona,” the researchers said in their research on hack-for-hire services.

“To bypass two-factor authentication, the most sophisticated attackers redirected our victim personas to a spoofed Google login page that harvested both passwords as well as SMS codes, checking the validity of both in real time. However, we found that two-factor authentication still proved an obstacle: attackers doubled their price upon learning an account had 2FA enabled.”

]]>
<![CDATA[Stack Overflow Updates Breach Advisory With More Details]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/stack-overflow-updates-breach-advisory-with-more-details https://duo.com/decipher/stack-overflow-updates-breach-advisory-with-more-details Mon, 20 May 2019 00:00:00 -0400

Stack Overflow has updated its initial security notice with additional details of the recent breach—unauthorized access to its production systems. The company’s prompt response to the breach and follow-up updates with additional details are an example of how a company should communicate with users after a security incident.

Attackers had access to Stack Overflow’s production systems for nearly one week and some user data was exposed, Stack Overflow’s vice president of engineering Mary Ferguson wrote in the update. The initial notification on May 16 had said there was no evidence that customer or user data was compromised.

The initial intrusion occurred on May 5 after attackers gained access to stackoverflow.com’s development tier through a bug that had been released in that day’s build and escalated their access level to reach the production environment, Ferguson said. They spent a few days reconnoitering, then performed an action that gave them privileged access to production systems. This allowed them to make privileged requests to obtain names, email addresses, and IP addresses of some Stack Exchange users.

“This change was quickly identified and we revoked their access network-wide, began investigating the intrusion, and began taking steps to remediate the intrusion,” Ferguson wrote.

Stack Overflow has separate infrastructure and networks for its Teams, Business and Enterprise products and there is no evidence that any of these systems or their customers have been impacted. Stack Overflow’s Advertising and Talent units also don’t appear to be affected. Roughly 250 users appear to be impacted and have been notified.

The company is looking through its logs for other suspicious activity and taking other “precautionary measures such as cycling secrets, resetting company passwords, and evaluating systems and security levels” in response to the incident, Ferguson said.

Breach Notification Done Well

Some Stack Overflow users praised the company’s prompt announcement and subsequent update. “I think this is one of the best sets of responses to a security incident I've seen,” a user wrote on Hacker News. The user identified two things Stack Overflow did well: disclosing the incident as soon as possible and being straightforward that the company was still investigating and didn’t have all the information yet, and adding more details during the course of the investigation.

“The proactive communication and transparency could have downsides (causing undue panic), but I think these posts have presented a sense that they have it mostly under control...I expect the next (or perhaps the 4th) post will be a fuller post-mortem from after the incident. This series of disclosures has given me more confidence in Stackoverflow than I had before!” the user wrote.

Organizations have to balance speed and thoroughness in breach notifications. Waiting to have all the information before making an announcement opens the organization to accusations of trying to hide the bad news or leaving users in the dark. Notifying users promptly without any details runs the risk of causing panic, overplaying the severity of the incident, or underestimating user impact. It is a tricky line, but the general recommendation is to notify promptly.

Stack Overflow struck the right tone, explaining that the investigation was underway and declining to speculate or guess about the impact. “After we conclude our investigation cycle, we will provide more information,” Ferguson wrote in the initial notification.

In the update, Ferguson outlined the steps taken, such as “conducting an extensive and detailed audit of all logs and databases that we maintain, allowing us to trace the steps and actions that were taken,” and “remediating the original issues that allowed the unauthorized access and escalation, as well as any other potential vectors that we have found during the investigation.”

Elements of a Good Response

An incident response plan is invaluable in the case of a breach because it clearly defines the stakeholders and establishes a course of action. A good incident response plan also defines clear communication channels—how information is shared internally among employees, investigators, and other stakeholders; and who coordinates the communications externally. The plan needs to be clear about what customers will be told, and how. It is extremely easy to botch the response.

One of the reasons companies get excoriated after a breach is because of the perception that they were not honest about what happened.

"This might sound like the most ridiculously obvious thing to say, but don't lie when disclosing an incident," wrote security researcher Troy Hunt, in a discussion of how organizations should respond to data breaches. "I know the truth may hurt, but the harsh reality of most data breaches is that there's been a failure at some point and now you need to own that."

It’s not enough to just be prompt. Organizations need to be clear about what happened, admit fault if there was a mistake, and accept responsibility. They should provide mitigation details if they know what happened, share information on what they are doing to prevent similar issues, and provide tips on what users can also do.

There are other examples of organizations handling breach response well. Last year, sports apparel maker Under Armour disclosed that an unauthorized party had acquired data associated with user accounts on the company's diet and fitness tracking app MyFitnessPal.

“Under Armour is showing it learned some lessons from companies breached in recent months by notifying its customers rapidly after discovering the intrusion,” Forrester security expert Jeff Pollard said at the time.

Hunt praised two companies, image site Imgur and comment moderation site Disqus, for how they handled their data breaches back in 2017.

Hunt notified Imgur of stolen data covering 1.7 million user records just before the Thanksgiving holiday in 2017. Imgur made full public disclosure of the breach 25 hours and 10 minutes after Hunt’s initial communication, Hunt wrote on Twitter. Disqus took 23 hours and 42 minutes from Hunt’s notification of a breach, involving email addresses, usernames, sign-up dates, and last login dates for 17.5 million Disqus users, to public notification and protection of the affected accounts, Hunt said on Twitter, calling the response "exemplary."

Disqus “applied urgency,” disclosed right away, provided details, acted quickly to protect impacted accounts by resetting passwords, and apologized to users, Hunt wrote at the time. "This was a dark moment for Disqus and there's no sugar-coating the fact that somehow, somewhere, someone on their end screwed up and they lost control of customer data. But look at the public sentiment after their disclosure; because of the way Disqus handled the situation, it's resoundingly positive."

]]>
<![CDATA[Code Repository Companies Pledge to Share Attack Data]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/code-repository-companies-pledge-to-share-attack-data https://duo.com/decipher/code-repository-companies-pledge-to-share-attack-data Fri, 17 May 2019 00:00:00 -0400

Earlier this month, when unknown attackers wiped repositories and left ransom notes for approximately 1,000 users on popular repository services BitBucket, GitHub, and GitLab, the companies had to act quickly to investigate the origins of the attack and help users recover their data.

The security teams at Atlassian, GitHub, and GitLab shared data on the attacks and information they uncovered on the attackers’ activities as part of their investigation and recovery efforts. The coordination worked so well, the companies have decided to formalize the relationship and continue sharing information, the companies said in their joint post-mortem incident report. One of the things the companies will do is regularly search for files in their users’ repositories containing credentials for other services.

"The security and support teams of all three companies have taken and continue to take steps to notify, protect, and help affected users recover from these events," the joint report said. "Further, the security teams of all three companies are also collaborating closely to further investigate these events in the interest of the greater Git community."

The post-mortem report, posted on each of the sites (BitBucket, GitHub, and GitLab), provides the detailed results of the companies’ collaborative investigation.

“Incident responders from each of the three companies began collaborating to protect users, share intelligence, and identify the source of the activity,” the report said.

On May 2, some BitBucket, GitHub, and GitLab users discovered their repositories had been wiped and replaced with a ransom note asking for Bitcoin. The fact that users were across all three platforms raised the possibility of a large operation where the attackers had figured out a way to compromise all the providers. After some investigation, the three providers confirmed independently that the attackers had compromised every single one of the ransomed repositories with legitimate credentials. In some cases, the attackers had usernames and passwords; in other cases, they had application passwords, API keys, and personal access tokens.

“After getting access to the user accounts, the attackers performed command-line Git commits, which resulted in overwriting the source code in repositories with the ransom note,” the companies wrote in their analysis.

Exposed Secrets

This attack was different from other types of credential stuffing attacks in that the attackers weren’t reusing credentials stolen and exposed in other breaches. Instead, the collection of passwords and keys came from the repositories themselves: users had mistakenly saved files containing these secrets to their repositories. The fact that users save files containing API keys and passwords to cloud storage services and code repositories is a known—and big—problem.

Service providers regularly scan repositories for potential problems, but lots of secrets are still exposed. GitHub is working on a token scanning feature, currently in beta, to notify other service providers when credentials and other secrets for their platforms are published to public GitHub repositories. GitLab added secret detection to its Static Application Security Testing (SAST) tool in GitLab 11.9.
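Token scanning of this kind boils down to pattern matching over repository contents. A minimal sketch in Python, assuming simple regular expressions; the pattern names and the config snippet are made up (the AWS key is the documentation example key), and production scanners match provider-registered token formats and verify candidates before alerting:

```python
import re

# Hypothetical patterns; real scanners match provider-registered token formats.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"),
}

def scan_text(text):
    """Return (pattern_name, matched_text) pairs for every suspected secret."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# A config file a user might mistakenly commit.
config = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\napi_key: "abcd1234abcd1234abcd1234"'
print(scan_text(config))
```

A real service would run this over every pushed blob and notify the matching provider rather than the committer alone.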

The companies worked together to identify the server that was the source of the attacks, and found a file on that server containing credentials for about a third of the victims. The companies reset or revoked the credentials after finding the file. They found that the server’s IP address was continuously scanning for publicly exposed .git/config and other environment files. Other IP addresses from the same hosting provider were also scanning for exposed files.

The attackers were still systematically scanning the repositories for git configuration files mistakenly containing stored credentials on May 10, more than a week after the ransom incidents first came to light.

Recovery, Prevention

The attackers erased the commit history of the remote repository, but that doesn’t mean users can’t get their repositories back. If the user has the latest copy of the repository locally, then force pushing the local copy would restore the repository. If a local copy isn’t available, then the user will need to clone the repository and search through the commit history to find the last good commit.
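In git terms, recovery from a surviving local copy is a force push of the last good commit. The sketch below walks the whole lifecycle, wipe and then recovery, against a throwaway local "remote"; all paths, branch names, and file contents are illustrative, and it assumes git is on the PATH:

```python
import os, subprocess, tempfile

def git(*args, cwd):
    """Run a git command in the given directory and return its stdout."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

base = tempfile.mkdtemp()
remote = os.path.join(base, "remote.git")   # stands in for the hosted repository
work = os.path.join(base, "work")           # the developer's local copy
os.makedirs(work)
subprocess.run(["git", "init", "--bare", "-q", remote], check=True)
git("init", "-q", cwd=work)
git("config", "user.email", "dev@example.com", cwd=work)
git("config", "user.name", "Dev", cwd=work)
with open(os.path.join(work, "app.py"), "w") as f:
    f.write("print('real project')\n")
git("add", "app.py", cwd=work)
git("commit", "-qm", "real work", cwd=work)
good = git("rev-parse", "HEAD", cwd=work).strip()   # last good commit
git("push", "-q", remote, "HEAD:refs/heads/master", cwd=work)

# The attack: the remote branch is force-replaced with a lone ransom commit.
git("checkout", "-q", "--orphan", "ransom", cwd=work)
git("rm", "-qf", "app.py", cwd=work)
with open(os.path.join(work, "README"), "w") as f:
    f.write("send bitcoin\n")
git("add", "README", cwd=work)
git("commit", "-qm", "ransom note", cwd=work)
git("push", "-qf", remote, "ransom:refs/heads/master", cwd=work)

# The recovery: force-push the last good commit from the surviving local copy.
git("push", "-qf", remote, f"{good}:refs/heads/master", cwd=work)
restored = git("show", "master:app.py", cwd=remote)
print(restored)
```

The key point is the final force push: the local object database still holds the pre-wipe commit, so pushing its SHA over the ransomed branch restores the remote.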

Developers should enable multi-factor authentication on their accounts to prevent this kind of unauthorized access. The post-mortem also emphasized being careful with personal access tokens, which bypass multi-factor authentication. These tokens may have read/write access to repositories and should be treated as being as valuable as passwords. Tokens should be supplied as environment variables and never hardcoded in source code.
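The "environment variable, not hardcoded" advice looks like this in practice; a minimal sketch, with an illustrative variable name:

```python
import os

def get_access_token():
    """Read the personal access token from the environment, never from source."""
    token = os.environ.get("REPO_ACCESS_TOKEN")  # variable name is illustrative
    if not token:
        raise RuntimeError("REPO_ACCESS_TOKEN is not set; "
                           "refusing to fall back to a hardcoded secret")
    return token

# In practice the variable is injected by a CI system or secret manager.
os.environ["REPO_ACCESS_TOKEN"] = "example-token"
print(get_access_token())
```

Failing loudly when the variable is missing is deliberate: a silent fallback to a baked-in default is exactly the kind of secret that ends up committed.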

There are several public-to-private information sharing programs where enterprises share information with the government, as well as industry-based information sharing and analysis centers (ISACs) and industry-focused consortiums and alliances. A few years ago, companies were skeptical about sharing data about attacks, and most sharing relied on ad hoc relationships between security teams at different organizations. Organizations now understand that sharing threat data can identify incidents sooner, and that the additional information is critical for a thorough incident response.

]]>
<![CDATA[Attackers Are Hiding By Tampering With Encrypted Web Traffic]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/attackers-are-hiding-by-tampering-with-encrypted-web-traffic https://duo.com/decipher/attackers-are-hiding-by-tampering-with-encrypted-web-traffic Thu, 16 May 2019 00:00:00 -0400

Attackers don’t want to get caught. Evading detection ensures they can keep working on their goal—such as making money or causing damage. Researchers have seen a recent spike in fingerprints for Transport Layer Security connections, as attackers tamper with Web traffic encryption to make malicious bot activity look like live human traffic.

Called Cipher Stunting, the technique is based on SSL/TLS signature randomization and changes the “fingerprints” of encrypted Web traffic, Akamai said in its analysis. Where there used to be “tens of thousands” of unique fingerprint variants, that number jumped to more than a billion within a span of six months, Akamai researchers said.

The boom in encrypted Web traffic means that attackers are also driving their malicious activity through encrypted connections—Akamai said 82 percent of malicious traffic, such as web application attacks, web scraping, and credential abuse, uses SSL/TLS. This has led many companies, Akamai included, to fingerprint connections to “differentiate between legitimate clients and impersonators, proxy and shared IP detection, and TLS terminators.”

An encrypted connection begins with an initial handshake request—known as the Client Hello packet—which contains information such as the type of encryption software being used, browser, operating system, and how the encryption package is configured. Akamai creates fingerprints based on the information stored in the Client Hello about the TLS version, the session ID, cipher-suite options, and extensions and compression methods being used.
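Akamai has not published its exact method, but the best-known open scheme, JA3, works along these lines: serialize the Client Hello fields into a string and hash it. A simplified sketch, with made-up field values:

```python
import hashlib

def tls_fingerprint(version, ciphers, extensions, curves, point_formats):
    """JA3-style fingerprint: serialize the Client Hello fields, then hash."""
    fields = [
        str(version),
        "-".join(str(c) for c in ciphers),
        "-".join(str(e) for e in extensions),
        "-".join(str(c) for c in curves),
        "-".join(str(p) for p in point_formats),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Field values below are illustrative, not a real browser's Client Hello.
fp = tls_fingerprint(771, [4865, 4866, 49195], [0, 11, 10], [29, 23], [0])
print(fp)
```

Because the hash covers the field values and their order, two connections from the same client stack normally produce the same fingerprint, which is what makes the technique useful for spotting impersonators.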

"The TLS fingerprints that Akamai observed before Cipher Stunting was observed could be counted in the tens of thousands. Soon after the initial observation, that count ballooned to millions, and then recently jumped to billions," said Akamai.

Akamai first saw a spike in August 2018, with 18,652 distinct fingerprints globally; the number had climbed to 255 million by the end of October and more than 1.3 billion instances by February 2019. Several of the fingerprints observed in April covered more than 30 percent of all Internet traffic and were attributed mostly to common browser and operating system TLS client stacks.

The change is on a "scale never seen before by Akamai," the company said.

While researchers initially did not see any attempts to tamper with Client Hello or any other fingerprint component, they started seeing TLS tampering via cipher randomization across several verticals including airlines, banking, and dating websites by September.

While it is possible that the increase in the number of variations could be because of some software changes in the OS, browser, or encryption software, it is even more likely that attackers are randomizing the signatures. Since the set of SSL/TLS stack implementations is relatively small, attackers are submitting a randomized cipher-suite list in the Client Hello to randomize the resulting hash. This way, a single machine or a network can look like millions of devices.

Akamai observed that the randomization technique was often used in credential-stuffing attacks against login pages, where attackers attempted to use credentials stolen from other sources.

While tweaking SSL/TLS client behavior can be “trivial” for some aspects of fingerprint evasion, Akamai said the difficulty level can ramp up quickly for other types of evasion since the attacker would need to understand how these packages work. Researchers determined “with a high degree of certainty that the cipher stunting has been carried out by a Java-based tool” which could mean that more attackers would be able to start using the techniques as they get their hands on the tool.

“The key lesson here is that criminals will do whatever they can to avoid detection and keep their schemes going,” researchers said.

]]>
<![CDATA[Google Warns of Flaw in Some Titan Security Keys]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/google-warns-of-flaw-in-some-titan-security-keys https://duo.com/decipher/google-warns-of-flaw-in-some-titan-security-keys Thu, 16 May 2019 00:00:00 -0400

Google is warning users of its Titan Bluetooth security keys about a weakness in the way the keys handle pairing with devices, a bug that an attacker could use to impersonate the key or the victim’s device in some highly specific circumstances.

The vulnerability only affects the Bluetooth Low Energy (BLE) Titan keys, and not the USB keys. The Titan keys are small hardware devices, comparable to a YubiKey or Solo, that are used for two-factor authentication for Google accounts. They’re part of the Advanced Protection Program that Google established a few years ago for people who are at a higher risk of being targeted by attackers and want an extra layer of security. The program provides two security keys: usually one BLE key and one USB-C key. The BLE key is meant for authenticating on mobile devices, and Google recently discovered an issue with the way those keys communicate with users’ devices.

“Due to a misconfiguration in the Titan Security Keys’ Bluetooth pairing protocols, it is possible for an attacker who is physically close to you at the moment you use your security key -- within approximately 30 feet -- to (a) communicate with your security key, or (b) communicate with the device to which your key is paired,” Christiaan Brand, a product manager for Google Cloud, said.

In order to exploit the weakness, an attacker would need to be within about 30 feet of a victim and would need to time his attack to coincide with the moment when the victim is pressing the button on her Titan key as part of the authentication flow. And even in that case, the attacker would also need to have the victim’s credentials already.

“An attacker in close physical proximity at that moment in time can potentially connect their own device to your affected security key before your own device connects. In this set of circumstances, the attacker could sign into your account using their own device if the attacker somehow already obtained your username and password and could time these events exactly,” Brand said.

In another scenario, an attacker could use his own device to impersonate the Titan key and possibly take some malicious actions on the victim’s device.

“Once paired, an attacker in close physical proximity to you could use their device to masquerade as your affected security key and connect to your device at the moment you are asked to press the button on your key. After that, they could attempt to change their device to appear as a Bluetooth keyboard or mouse and potentially take actions on your device,” Brand said.

The vulnerability affects the T1 and T2 versions of the Titan keys, and Google is contacting people who have purchased those keys and providing instructions on how to get a free replacement. In the meantime, Google is recommending that people with affected keys continue to use them, given that the attack scenarios are limited and the keys still offer much better protection than a username and password alone.

“This security issue does not affect the primary purpose of security keys, which is to protect you against phishing by a remote attacker. Security keys remain the strongest available protection against phishing; it is still safer to use a key that has this issue, rather than turning off security key-based two-step verification (2SV) on your Google Account or downgrading to less phishing-resistant methods (e.g. SMS codes or prompts sent to your device),” Brand said.

The weakness doesn’t affect NFC keys.

]]>
<![CDATA[Decipher Podcast: Daniel Gruss]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/decipher-podcast-daniel-gruss https://duo.com/decipher/decipher-podcast-daniel-gruss Wed, 15 May 2019 00:00:00 -0400

This week, several separate teams of researchers disclosed new speculative execution attacks against Intel processors. Dennis Fisher spoke with Daniel Gruss of TU Graz in Austria, one of the researchers who developed the Zombieload attack and helped work on some of the others, as well.

]]>
<![CDATA[Intel, Tech Giants Release Updates to Fix New Chip Flaws]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/intel-tech-giants-release-updates-to-fix-new-chip-flaws https://duo.com/decipher/intel-tech-giants-release-updates-to-fix-new-chip-flaws Wed, 15 May 2019 00:00:00 -0400

The discovery of Meltdown and Spectre vulnerabilities in Intel chips last year opened up a whole new class of hardware vulnerabilities and encouraged further research in the processors’ speculative execution feature. Intel and a group of researchers have identified new side-channel attacks targeting weaknesses in speculative execution, similar to Meltdown and Spectre.

Modern processors rely on speculative execution—where the processor tries to guess which instructions a program may call next and executes them long before the program needs to—in order to boost performance. If the commands have already been carried out and the answer is ready by the time the program needs it, that’s microseconds saved. The processor discards the results of instructions on paths that turn out not to be needed.

Named ZombieLoad (CVE-2019-11091, CVE-2018-12130), Rogue In-Flight Data Load (RIDL, CVE-2018-12130), and Fallout (CVE-2018-12126), the attacks use different methods to harvest information—including passwords, website content, disk encryption keys, and user browsing history—stored in memory by applications, operating systems, virtual machines, and trusted execution environments. For example, ZombieLoad can let attackers obtain a user’s browsing history even if the victim had surfed the web from a virtual machine and used Tor. Fallout lets attackers determine the operating system’s memory position, which is valuable information for carrying out other attacks.

The attacks work across security boundaries—including the boundary between user-space/kernel-space, OS processes, virtual machines, and SGX enclaves. The security boundaries are like “the Wall which was supposed to protect the living from the dead,” wrote Kelly Shortridge, vice president of product strategy at Capsule8, referencing the TV show (and book) Game of Thrones. The boundaries keep user data separate from kernel data.

“Just like the Wall stood for one thousand years, these system security boundaries are supposed to be steadfast, so it’s pretty bad when they aren’t,” Shortridge wrote.

To describe how these attacks obtain data, Shortridge used a cat cafe analogy. “The cats will want to play with toys at some point, but you don’t totally know which toys they will want,” Shortridge wrote. Speculative execution is holding toys to give the cats when they look like they are about to pounce, and putting the toys away if they play with a shadow on the wall instead. If a cat peeks at your hands to see what toys were prepared, that’s an attack.

“ZombieLoad 'resurrects' the toys that were just in your hand to see them. RIDL spies on your toys as they’re in your hands. Fallout reconstructs your toys and if it does this a lot, it can pinpoint exactly where the toys were in your hand, too,” Shortridge wrote.

The most likely attack vector involves planting malware on targeted systems as an authenticated local user (with low privileges) to exploit the vulnerabilities, but some of the issues can be triggered remotely using JavaScript code on websites. The fact that these attacks target issues in the processor means it would be hard for security software to detect them, and in many cases, the attacks would not leave behind any traces in the logs.

These attacks have not been observed in the wild.

One thing about timing attacks: they typically take time to execute, so they aren’t going to be part of a smash-and-grab attack. In fact, it isn’t very likely that these attacks will be used by commodity malware. The attacks, if they come, are more likely to be used in highly targeted operations.

While these new attacks are similar to Meltdown and Spectre in that they target speculative execution, they are different in that they target data stored in various buffers within the processor, not the level 1 cache where data is stored. The attacks read the Fill Buffers (temporary buffers between CPU caches), Load Ports (temporary buffers used when loading data into registers) and Store Buffers (temporary buffers to hold store addresses and data). These buffers can also hold stale data.

The issues affect many of the Intel chips made within the last ten years, which includes most Intel Core and Xeon chips dating back to 2011. Intel said newer chips, including some 8th and 9th generation Core processors and 2nd generation Xeon Scalable processors, address these issues at the hardware level. ARM (used in mobile devices) and AMD processors do not appear to be affected.

Different Names, Same Bugs

Intel refers to these issues collectively as Microarchitectural Data Sampling (MDS), and assigned the following names and CVE identifiers: Microarchitectural Store Buffer Data Sampling (MSBDS, CVE-2018-12126), Microarchitectural Load Port Data Sampling (MLPDS, CVE-2018-12127), Microarchitectural Fill Buffer Data Sampling (MFBDS, CVE-2018-12130), and Microarchitectural Data Sampling Uncacheable Memory (MDSUM, CVE-2019-11091). MDS lets programs read data they otherwise would not have access to; the data is leaked by the CPU through a locally executed speculative execution side channel.

“MDS does not, by itself, provide an attacker with a way to choose the data that is leaked,” Intel said. “Practical exploitation of MDS is a very complex undertaking.”

The new attacks have multiple names because they were found by different groups of researchers. Intel said its own researchers and partners first identified the vulnerabilities, which were later independently reported by other research groups from the University of Michigan, Worcester Polytechnic Institute in Massachusetts, Graz University of Technology in Austria (one of the original discoverers of Meltdown and Spectre), imec-DistriNet at Katholieke Universiteit te Leuven (KU Leuven) in Belgium, the University of Adelaide in Australia, VU Amsterdam’s VUSec group, Microsoft, Bitdefender, Oracle, and Qihoo 360. It appears Intel started receiving these outside reports in June 2018.

“It’s neat that disparate research teams all stumbled on new types of side channel attacks around the same time,” Shortridge said.

Lots of Updates

Intel asked the researchers to keep the details of the vulnerabilities quiet in order to coordinate fixes with other vendors and also to prepare its own updates. Intel has released its own microcode updates. There are additional opt-in mitigations to disable Hyper-Threading and enable microcode-based mitigations. The mitigations and the microcode updates may impact performance for data center workloads, but should have minimal impact on PCs. Intel said that on a Core i9-9900K with Hyper-Threading disabled, the performance hit was as much as 9 percent.

Researchers have noted that the recommended defenses for previously disclosed speculative execution attacks are not sufficient against these new attacks.

While individual PCs and servers are affected, the biggest dangers are for data centers and large cloud-service providers. Most of the major cloud providers have already taken steps to protect their customers. Amazon Web Services has already deployed protections to all its infrastructure and released updated kernels and microcode packages for Amazon Linux AMI 2018.3 and Amazon Linux 2. IBM is rolling out protections to its cloud services. Google said its cloud infrastructure, G Suite, and Google Cloud Platform products and services are protected. Microsoft has also deployed its own server-side fixes to Azure to mitigate the vulnerabilities.

Microsoft and Apple have released updates. Microsoft also released a PowerShell script to let users check the status of speculative execution mitigations on their systems. Apple included a Safari update with Mojave 10.14.5 to prevent exploitation from the internet. Google disabled hyper-threading by default in Chrome OS 74, and additional mitigations will be added in Chrome OS 75.

VMware said its vCenter Server, vSphere ESXi, Workstation, Fusion, vCloud Usage Meter, Identity Manager, vSphere Data Protection, vSphere Integrated Containers, and vRealize Automation products are affected, and has released software updates and patches. Citrix has released a hotfix for XenServer 7.1, which includes hypervisor and CPU microcode updates.

While Oracle SPARC servers and Solaris on SPARC are not affected, Solaris on x86 systems is, Oracle said. Updates are available for Oracle Linux and Oracle VM Server products.

Linux kernel developers released an advisory. Of the Linux distributions, Red Hat Linux, Debian, Ubuntu, and SUSE have started rolling out updates. Updates for Ubuntu LTS will be delivered through Extended Security Maintenance. Ubuntu users should also disable Simultaneous Multi-Threading/Hyper-Threading (SMT) in the BIOS, as the updates alone do not fully mitigate the issues.

Systems running all versions of Xen are affected if they use x86 Intel processors, the Xen Project said in its advisory.

Among hardware manufacturers, HP and Lenovo have released firmware patches so far.

]]>
<![CDATA[Microsoft Patches Legacy Windows to Prevent Worms]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/microsoft-patches-legacy-windows-to-prevent-worms https://duo.com/decipher/microsoft-patches-legacy-windows-to-prevent-worms Tue, 14 May 2019 00:00:00 -0400

After Microsoft ended support for Windows 2003 and Windows XP, there weren’t supposed to be any more security updates for those systems. If a vulnerability was found, then it would have to remain unpatched.

Except it turns out some vulnerabilities are too dangerous to leave unfixed, especially since there are production environments with legacy machines running older versions of Windows. Microsoft decided the risk of leaving the [remote code execution flaw in Remote Desktop Services](https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2019-0708) (CVE-2019-0708) unpatched was too high and released security updates for Windows 2003 and Windows XP, along with updates for the still-supported Windows 7, Windows Server 2008, and Windows Server 2008 R2.

Microsoft recommends patching this issue as soon as possible and issued guidance for unsupported vulnerable systems.

Newer versions of the operating system (Windows 10, Windows 8.1 and 8, Windows Server 2019, Windows Server 2016, and Windows Server 2012 and 2012 R2) are not affected. The vulnerability is in Remote Desktop Services, formerly known as Terminal Services, which only affects older versions of Windows and is not the same thing as the Remote Desktop Protocol.

While there has been no evidence of attackers exploiting this vulnerability in the wild, Microsoft believes an exploit is “highly likely” because the vulnerability can be exploited without authentication and without user interaction. This is the kind of vulnerability that a worm would exploit to propagate on its own from one machine to another.

“The vulnerability is ‘wormable’, meaning that any future malware that exploits this vulnerability could propagate from vulnerable computer to vulnerable computer in a similar way as the WannaCry malware spread across the globe in 2017,” wrote Simon Pope, Director of Incident Response at the Microsoft Security Response Center (MSRC).

Echoes of WannaCry

The initial attack would look something like this: an unauthenticated attacker connects to the target machine via RDP and sends specially crafted requests to gain control over the system. They would be able to install programs; view, edit, and delete data; and create new accounts with full user rights. Once this machine is infected, it would be possible to launch a worm capable of propagating from vulnerable machine to vulnerable machine as the vulnerability is pre-authentication and does not require user interaction.

The security update corrects how Remote Desktop Services handles connection requests.

For organizations that can’t patch right away, Microsoft said enabling Network Level Authentication (NLA) on vulnerable machines acts as a partial mitigation. With NLA enabled, the vulnerability can’t be triggered without authentication, which would stop a worm from spreading on its own. However, an attacker with valid credentials can still exploit the vulnerability. Many organizations rely on weak passwords for RDP; strong credentials would prevent brute-forcing, but a determined attacker has plenty of ways to steal RDP credentials.

WannaCry was a ransomware worm that scanned for vulnerable systems, used the EternalBlue exploit to gain access, and used the DoublePulsar tool to install and execute a copy of itself on each new machine. Within a day, WannaCry had infected more than 230,000 computers in over 150 countries, bringing many organizations to a complete standstill and disrupting operations at others. Experts believe it spread primarily through unpatched Windows 7 systems.

With the new vulnerability, the risk is high for industrial facilities, as many of them still have legacy operating systems in their networks. Industrial cybersecurity company CyberX analyzed traffic from over 850 operational technology (OT) networks worldwide and found unsupported versions of Windows—many of which are likely to be affected by this flaw—running in 53 percent of industrial sites.

“The problem stems from the fact that patching computers in industrial control networks is challenging because they often operate 24x7 controlling large-scale physical processes like oil refining and electricity generation,” said Phil Neray, CyberX vice-president of industrial cybersecurity.

]]>
<![CDATA[WhatsApp Flaw Used in Targeted, Not Widespread, Attacks]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/whatsapp-flaw-used-in-targeted-not-widespread-attacks https://duo.com/decipher/whatsapp-flaw-used-in-targeted-not-widespread-attacks Tue, 14 May 2019 00:00:00 -0400

WhatsApp has patched a severe weakness in its software for both iOS and Android that some actors have been using in highly targeted attacks with an exploit that requires no interaction from the victim.

It’s not clear exactly how long the vulnerability has been present, but researchers at WhatsApp recently discovered it and set about fixing it. Some attackers, however, had been exploiting the vulnerability for an unknown amount of time, targeting a small number of victims, according to reports. The attacks were part of attempts to install a spyware tool on victims’ phones, reportedly the Pegasus software sold by Israeli firm NSO Group. That firm sells its system to law enforcement and intelligence agencies, and the Pegasus spyware has been linked to compromises of devices owned by journalists, dissidents, and human rights activists in a number of countries.

The vulnerability itself can be exploited without any actions from the victim, and reports say that the known exploit attempts have used voice calls to the victim’s device as the exploit vector. Victims do not need to answer the call in order for the exploit to work.

“A buffer overflow vulnerability in WhatsApp VOIP stack allowed remote code execution via specially crafted series of SRTCP packets sent to a target phone number,” the advisory from Facebook, which owns WhatsApp, says.

There are fixed versions of WhatsApp available for iOS and Android, and security experts are recommending that users install the fix as soon as possible. The vulnerability is quite serious, and it is the kind that is used in targeted attacks by advanced actors, typically intelligence agencies and law enforcement organizations. The victims in such attacks often have no indication that their devices have been compromised and the kind of powerful spyware tools used in these operations typically give operators the ability to monitor voice, text, and other apps remotely.

Platforms such as WhatsApp, Signal, iMessage, and others that offer secure, encrypted messaging or voice communications have become prime targets for high-level attackers. Many people rely on these platforms for secure, private communications, including reporters, dissidents, abuse victims, and others. Criminal groups and terror organizations are also known to use such tools, so vulnerabilities in those platforms are extremely valuable for law enforcement agencies seeking to keep tabs on suspects.

Those vulnerabilities are also highly prized because of their scarcity. Apple’s iOS is considered one of the more difficult platforms to exploit and Signal and WhatsApp vulnerabilities are valuable, as well. Zerodium, a firm that buys vulnerabilities from researchers, pays up to $1 million for remote code execution bugs in iMessage and WhatsApp and up to $500,000 for such flaws in Signal.

"Quite frankly, we are on the losing side of a disheartening asymmetry of capabilities that favors attackers over us, defenders.”

“Unfortunately, so called ‘0-click’ exploits are more common than it appears on the press, and blaming WhatsApp for this security flaw is shortsighted, as we can surely expect competitor apps to be equally targeted and most likely already exploited,” Claudio Guarnieri, a security researcher who has tracked surveillance technology makers closely and now works for Amnesty International, said in a newsletter article Tuesday.

Human rights organizations and other groups have been highly critical of the use of these tools to target journalists, activists, and others, as well as of the software makers themselves. Regulation of the sale and export of advanced surveillance tools varies widely by country, and while there are several well-known sellers of these systems, there are many more that haven’t yet attracted widespread media or research attention.

Security researchers track the sellers of these systems closely, but one of their challenges in this work is discovering compromises and infected devices. While the number of known victims of these operations is usually quite small, the population of unknown victims is more worrisome to researchers.

“The amount of documented cases of targeting of journalists and human rights defenders using NSO Group's products and services is evergrowing. And although we expect more to come to light in the future, all that we know so far is most likely a small fraction of the whole. Attacking and infecting mobile devices is a difficult, but not impossible, task because of the many security mitigations and lockdowns baked into mobile platforms, such as Android and even more so iOS. However, these security controls have made mobile devices extremely difficult to inspect, especially remotely, and particularly for those of us working in human rights organizations lacking access to adequate forensics technology,” Guarnieri said.

“Because of this, we are rarely able to confirm infections of those who we even already suspect being targeted. Last August, for example, we discovered one of our Amnesty staff members was targeted with Pegasus but whether others were too is not possible for us to confirm. Quite frankly, we are on the losing side of a disheartening asymmetry of capabilities that favors attackers over us, defenders.”

CC By 2.0 image from Tim Reckman.

]]>
<![CDATA[Decipher Podcast: Alex Pinto]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/decipher-podcast-alex-pinto https://duo.com/decipher/decipher-podcast-alex-pinto Mon, 13 May 2019 00:00:00 -0400

The publication of the Verizon Data Breach Investigations Report is an important event every year for the infosec community, and the 2019 version includes analysis of data from more than 41,000 incidents and more than 2,000 actual breaches. Dennis Fisher talks with Alex Pinto of Verizon Enterprise about the trends in this year's report, how the data is collected, synthesized and analyzed, and what surprises the report holds.

You can read our piece on the DBIR here.

]]>
<![CDATA[Digging Deep into the Verizon DBIR]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/digging-deep-into-the-verizon-dbir https://duo.com/decipher/digging-deep-into-the-verizon-dbir Mon, 13 May 2019 00:00:00 -0400

Now that the initial flurry of excitement around the data breach statistics has died down, enterprise defenders should revisit Verizon's 2019 Data Breach Investigations Report and take a deeper look at the charts and data analysis to understand the threats they face. The 2019 DBIR gives enterprise defenders the details about how attackers actually behave in real-world attacks.

Verizon’s DBIR combines information about actual breaches and incidents the Verizon Enterprise Solutions team investigated in 2018 with information voluntarily provided by 73 partners. The twelfth edition of the report analyzed 41,686 security incidents, including 2,013 confirmed breaches, from 86 countries. Verizon defines a breach as an incident that results in confirmed disclosure or exposure of data. The distinction is important, since a distributed denial-of-service attack would be an incident but not a breach, because data was not (usually) exposed. The report helps enterprises better understand what attackers are doing and what they are most likely to attack next.

“This [report] is going to be useful for defenders, CISOs, security leaders on the defense side, to figure out if they are spending time, money, and resources, in the right direction,” said longtime security practitioner Adrian Sanabria. “They can pore over the report and say, ‘Oh, no, we poured in way too much money on this thing,’ and adjust accordingly.”

What We Knew

The DBIR validated a lot of things security professionals already knew, such as the fact that the majority of attacks are carried out by outsiders. Of the data breaches Verizon investigated in 2018, 69 percent were external attacks. A little more than half of the breaches, 52 percent, involved some kind of compromise (or hacking, as the report calls it), while 33 percent involved social engineering. About 28 percent of the breaches used malware.

Passwords continue to be the weak spot for enterprises, as 32 percent of the breaches involved phishing and 29 percent involved the use of stolen credentials. Web application attacks continue to be the most common attack vector for data breaches, but this category includes both compromising the application through some kind of vulnerability and attackers simply logging in with stolen credentials. The report said over half of breaches associated with web application attacks involved attackers using stolen credentials.

Even with all of the indictments and research discoveries uncovering espionage operations, the overwhelming majority of attacks are still financially motivated: in 71 percent of data breaches, the perpetrators were trying to make money. One of the upsides of financial institutions in the United States switching to EMV payment cards (chip-and-sign in the United States, chip-and-PIN elsewhere) is that card-present fraud involving compromised point-of-sale terminals and card-skimming operations fell by 57 percent. But, as expected, attackers have shifted to card-not-present fraud by targeting web applications to capture payment card details. In some cases, card-not-present fraud is now more extensive than card-present fraud, the report found.

Need to Understand Paths

This year’s DBIR introduced “attack paths,” which attempt to document the steps attackers used in an incident or breach. The attack path shows all the different routes attackers took from the initial compromise, through finding information, to achieving their goal at the final point. Attackers don’t plan out a long campaign with many steps if they don’t have to: attack paths are much more likely to be short than long, the report found. “[Defenders] fail to stop short paths substantially more often than long paths,” the report said. “Short attacks work.”

“Think: Cyber Kill Chain, but from a Threat Intel perspective,” wrote Lance Spitzner, the director of SANS Security Awareness at the SANS Institute.

While hacking—such as exploiting software vulnerabilities—was the most common first action in an incident, “it could be almost anything,” the report said, including social engineering or human error. However, malware was frequently used in the middle and at the end of the line—attackers are still relying heavily on malware to carry out their attacks.

“We got really good at stopping malware from getting in, so they use other paths to get in, turn off antivirus, and then use malware anyway,” Sanabria said.

This section of the report used golf as the metaphor. “Malware is usually not the driver you use to get off the tee; remember that most is delivered via social or hacking actions...[Malware] may not be the opening shot, but it is the trusty 7-iron (or 3 wood, pick your analogy according to your skills), that is your go-to club for those middle action shots...While social attacks are significant for starting and continuing attacks...they’re rarely the three-foot putt followed by the tip of the visor to the sunburned gallery.”

The report’s authors described how the course creator builds sand traps and water hazards along the course to make the game challenging for the players, using this metaphor to show that enterprises rely on defenses and mitigations to deter, detect, and defend their networks from attacks. The problem with the metaphor is that it overstates how many enterprises actually put these kinds of defensive layers in place, and the number of roadblocks defenders erect. The report makes it sound like “most enterprises have sand traps or water hazards, when they do not,” Sanabria said.

Even so, the section on attack paths is the “most valuable part of this year’s report,” Sanabria said. “It’s the first time I’ve seen a report break out the steps this way.”

Information security talks a lot about the kill chain, but it is rarely mapped against actual breaches. Most data breach conversations focus on the initial point of compromise, but that is a single point in time and leaves out how the rest of the attack unfolded. Take Equifax, for example: the company was initially skewered for not patching the Struts vulnerability in the affected server, but the detailed post-mortem report afterward revealed how complex the situation was and highlighted the challenges organizations have with legacy infrastructure.

When defenders have just a “snapshot of an attacker’s process,” they have to guess what the attackers did before and after that point in time. Those guesses have an impact on how the defenses are put together, the Center for Internet Security wrote in the report.

“Defending against malware takes a different approach if the malware is dropped via social engineering, a drive-by download, or brought in by an insider via a USB device,” CIS said.

Post-mortems are valuable for defenders to learn from other people’s mistakes, but in-depth ones are generally difficult to find. Seeing some of that analysis in the report will help defenders understand how to prioritize their defenses.

What We Need to Know

The data confirmed that attackers are not interested in doing more work than they have to in order to reach their goal. Attackers are increasingly targeting corporate executives, and business email compromise attacks are on the rise, the report found. The DBIR grouped CEO fraud and other attacks against corporate executives as financially motivated social engineering attacks (FMSE). The end goal of these phishing attacks isn’t to get victims to click on links or open malicious attachments, but rather to get them to transfer money to the attackers.

“For the criminal it’s all about bang for your buck,” Sanabria said. “Sending an email and saying, ‘Please direct your money here,’ is pretty easy.”

Executives are six times more likely to be the target of a social engineering campaign than they were just a year ago, and C-level executives are 12 times more likely to be targeted. Tax fraud committed with stolen W-2 forms has practically disappeared. Senior executives try to review and act on emails quickly, so suspicious emails are more likely to slip through.

DBIR also provides hard data on which assets the attackers are going after, giving defenders insight into what to protect, Sanabria said. While workstations have long been targeted, web applications and mail servers are increasingly being targeted as well. Attacks trying to steal credentials most often involved cloud-based mail servers and there was an uptick in attacks trying to compromise victim email accounts, the report found. Breaches with compromised payment cards were increasingly the result of compromised web servers.

How attackers use malware has completely changed in the last five years, Sanabria said. Flash-based malware used to be the most prevalent and was a surefire way to infect people, but as browsers became more secure, Flash-based malware became much less common. Email remains the most common way to distribute malware, but as the attack paths section showed, enterprises also have to think about cases where attackers gain a foothold through other means and then directly install the malware.

Ransomware was the second most prevalent type of malware across all industries, but the researchers noted that ransomware isn’t treated the same in every industry. Healthcare organizations are required by HIPAA regulations to report ransomware infections as data breaches because data is involved, but that isn’t the case for other industries. Ransomware as a whole accounted for 24 percent of data breaches in Verizon’s data set, but when looking only at healthcare data, ransomware jumps to 70 percent of data breaches, according to the DBIR. That suggests that the number of infections is being undercounted, and that the threat ransomware poses is underestimated.

What We Didn't Know

Because DBIR is based on actual investigations, its findings help debunk some of the myths that persist in information security, Sanabria said. The report makes it clear that about a third of the breaches—34 percent—involved internal employees, but most of these breaches were not the result of someone acting maliciously. Misconfigured cloud-based file storage accounted for 21 percent of data breaches caused by errors.

“[While] the rogue admin planting logic bombs and other mayhem makes for a good story, the presence of insiders is most often in the form of errors,” the report said. “Please, close those buckets!”

Errors include misconfiguring servers to allow unauthorized access, publishing data to a server where it should not be visible to everyone, and emailing sensitive files to the wrong person.

“Just teaching people to double check the TO: address in their email draft before hitting the “send” button could reduce almost 10% of all breaches globally,” Spitzner wrote.

Insider attacks were more prevalent in health care than external attacks. In medical data breaches involving an internal employee, the insider was a medical professional, such as a doctor or nurse, 14 percent of the time.

What May Be Hype

While there has been a surge in the number of cryptomining attacks, cryptomining malware did not break into the ten most prevalent forms of malware. There were 39 cases in Verizon’s data set, which is “more than zero, but still far fewer than the almost 500 ransomware cases this year,” the report said. “The numbers in this year’s data set do not support the hype.”

One reason cryptomining may not show up much in Verizon’s data sets is that it may not be reported as often. Cryptomining consumes resources, so it is more of an annoyance than malicious in the same sense as data theft, Sanabria said. Many organizations may not even notice an infection unless the malware was on a cloud server and generated an expensive bill.

The idea that most attacks in the manufacturing sector are for espionage is widely accepted, except the data doesn’t bear that out. “For the second year in a row, financially motivated attacks outnumber cyber-espionage as the main reason for breaches in Manufacturing, and this year by a more significant percentage (40% difference),” the report said. However, the team stopped short of saying that espionage is no longer a problem in manufacturing (or education, another popular vertical) and noted that it could just be a bias in the data that was collected.

The report also paints an interesting picture of the state of defense today. The assumption in the past was that only the small number of companies that had to worry about nation-state attacks needed nation-state-caliber defenses. The report showed that nation-state attacks are on the rise across all sectors, because attackers will go after any target that has information of value to them. Even small businesses are now targets if they have information someone wants. For example, nation-state attackers may go after a restaurant's surveillance cameras if high-level political figures regularly eat there.

“We used to say that a fraction of companies needed to worry about nation-state attackers. It looks like we can’t say that anymore,” Sanabria said. “Maybe everyone has to worry about state stuff—but that will be a hard bar for the SMB to clear.”

“The purpose of this study is not to rub salt in the wounds of information security, but to contribute to the ‘light’ that raises awareness and provides the ability to learn from the past,” Verizon’s research team wrote in the report. “Use it as another arrow in your quiver to win hearts, minds, and security budget.”

]]>
<![CDATA[Deciphering Swordfish and Three Days of the Condor]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/deciphering-swordfish-and-three-days-of-the-condor https://duo.com/decipher/deciphering-swordfish-and-three-days-of-the-condor Fri, 10 May 2019 00:00:00 -0400

There are good movies, there are bad movies, and then there's Swordfish, a movie that exists in a world beyond your world. It has everything: John Travolta, Halle Berry, guns, an incoherent plot, 128-bit DES encryption, a multi-headed worm. Dennis Fisher, Zoe Lindsey, and Pete Baker break it all down and then we mercifully move on to Three Days of the Condor, a classic of the 1970s paranoia genre and early techno-thriller with an all-time great Robert Redford performance.

]]>
<![CDATA[FTC Pushes For Federal Privacy Law]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/ftc-pushes-for-federal-privacy-law https://duo.com/decipher/ftc-pushes-for-federal-privacy-law Thu, 09 May 2019 00:00:00 -0400

With the number of privacy and data-breach bills on Capitol Hill increasing almost daily, the Federal Trade Commission is making a play to become the lead federal enforcement agency for any new law that goes on the books.

Right now, the FTC has some authority to enforce privacy regulations, but it all flows from Section 5 of the Federal Trade Commission Act, which applies to unfair and deceptive trade practices. The FTC has used it many times over the years to bring cases against companies that violated regulations on health care, financial, and other types of privacy. But in July, FTC Chairman Joseph Simons told the House Energy and Commerce Committee that the commission needed more resources and authority to take action against companies that run afoul of privacy regulations.

“Privacy and data security will continue to be an enforcement priority at the Commission, and it will use every tool at its disposal to redress consumer harm. Many of the FTC’s investigations and cases involve complex facts and well-financed defendants, often requiring outside experts, which can be costly. It is critical that the FTC have sufficient resources to support its investigative and litigation needs, including expert work, particularly as demands for enforcement in this area continue to grow,” Simons said.

Section 5 has broad language, but it has some exceptions, as well. For example, it doesn’t give the FTC the power to impose civil penalties. There are other limitations, as well, and on Wednesday Simons and several other members of the commission testified in front of the same House committee and encouraged Congress to pass federal privacy legislation and give the FTC the power to enforce it.

“[Section 5] also excludes non-profits and common carriers from the Commission’s authority, even when the acts or practices of these market participants have serious implications for consumer privacy and data security. To better equip the Commission to meet its statutory mission to protect consumers, we urge Congress to enact privacy and data security legislation, enforceable by the FTC, which grants the agency civil penalty authority, targeted APA rulemaking authority, and jurisdiction over non-profits and common carriers,” the FTC’s prepared testimony says.

The FTC’s advocacy for federal privacy and security legislation is neither new nor unique. Privacy-focused organizations and security experts have been urging Congress to pass broad federal legislation in this area for many years, but things haven’t moved in that direction. There is some support for the idea on Capitol Hill; in fact, Frank Pallone (D-N.J.), chairman of the Energy and Commerce Committee, said in his opening statement Wednesday that the committee plans to move on it soon and that the FTC should have the authority not only to enforce existing regulations, but to help stop violations ahead of time.

“The FTC also needs more authority to prevent privacy abuses from happening in the first place and to ensure that companies properly secure the personal data entrusted to them,” Pallone said.

“Congress must pass strong, comprehensive privacy legislation, and this Committee will take action. The legislation should give consumers control over their personal data, including giving consumers the ability to access, correct, and delete their personal information. And it should shift the burden to companies to ensure they only use the information consistent with reasonable consumer expectations.”

The FTC’s push for more authority is at odds with the sentiment from many in the privacy community, who believe there should be a separate agency with authority over privacy. Last week, leaders from the Electronic Privacy Information Center sent a letter to the Senate Committee on Commerce, Science and Transportation urging the creation of an independent agency.

“Given the enormity of the challenge, the United States would be best served to do what other countries have done and create a dedicated data protection agency. An independent agency could more effectively utilize its resources to police the current widespread exploitation of consumers’ personal information and would be staffed with personnel who possess the requisite expertise to regulate the field of data security,” the letter from EPIC President Marc Rotenberg and Policy Director Caitriona Fitzgerald says.

Another key consideration is how any new privacy legislation would apply to federal agencies themselves and the data they collect and store on citizens. Most of the myriad draft bills circulating in Washington right now focus on penalties for private companies that violate regulations and make no mention of federal agencies.

]]>
<![CDATA[Google Wants to Change How Cookies Are Used]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/google-wants-to-change-how-cookies-are-used https://duo.com/decipher/google-wants-to-change-how-cookies-are-used Thu, 09 May 2019 00:00:00 -0400

Google has to walk a fine line between collecting as much information as possible to deliver a personalized user experience and knowing so much about individual users that it becomes creepy. The balancing act doesn’t always work, but that hasn’t stopped the company’s engineers from tinkering with new privacy-focused features.

Google I/O, the company’s developer conference, is a good place to announce new privacy features “coming soon” in Android Q, smartphones, and other Google services. With so much else going on, it’s also a good time to bury changes to how web browsing currently works. In particular, Google announced changes to how Chrome will handle HTTP cookies.

Cookies are multi-purpose: they can tell websites that the user is a repeat visitor, enable the “remember me” option when logging in, keep items in the shopping cart after the user navigates away from the page, serve up personalized ads and content based on past behavior, and track the user from site to site. Most browsers can now prompt before allowing a site to store cookies, but for the most part, they can’t tell the difference between different types of cookies. If the browser could distinguish a cookie that keeps the user logged in from one that tracks web activity, privacy-conscious users could keep clearing out the “bad” cookies (tracking) without giving up the “good” cookies (logged in).

“Unfortunately, to browsers, all of these different types of cookies look the same, which makes it difficult to tell how each cookie is being used — limiting the usefulness of cookie controls,” wrote Ben Galbraith, director of Chrome product management, and Justin Schuh, director of Chrome engineering.

What’s needed are cookies that can tell browsers their purpose, and controls designed to handle specific types of cookies.

“Blunt solutions that block all cookies can significantly degrade the simple web experience that you know today, while heuristic-based approaches—where the browser guesses at a cookie's purpose—make the web unpredictable for developers,” Galbraith and Schuh said.

Later this year, Google will add new features to Chrome that provide transparency into how sites are using cookies. The first step, before making changes in Chrome, is modifying how cookies themselves work. To do that, Google will require website developers to set a cookie attribute specifying how the cookie will be used, in particular whether it will work across websites. The mechanism will be built on the web’s SameSite cookie attribute.
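In practice, a site declares this behavior in the Set-Cookie header it sends to the browser. A minimal sketch using Python’s standard http.cookies module (Python 3.8 or later for SameSite support; the cookie name and value here are illustrative, not from Google’s announcement):

```python
from http.cookies import SimpleCookie

# Build a Set-Cookie header that declares the cookie's cross-site behavior.
# SameSite=None marks a cookie explicitly intended to work across websites;
# under Google's plan such cookies must also be marked Secure (HTTPS-only).
cookie = SimpleCookie()
cookie["session"] = "abc123"            # hypothetical cookie name/value
cookie["session"]["samesite"] = "None"  # explicit cross-site cookie
cookie["session"]["secure"] = True      # pair with SameSite=None
print(cookie.output())
```

A first-party-only cookie would instead set `samesite` to "Lax" or "Strict", which is what lets the browser tell the two kinds apart.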

“This change will enable users to clear all such cookies while leaving single domain cookies unaffected, preserving user logins and settings,” the engineers wrote. “It will also enable browsers to provide clear information about which sites are setting these cookies so that users can make informed choices about their data.”

The security benefit is also clear, since cookies will no longer be usable in cross-site injection and data disclosure attacks, or in cross-site request forgery (CSRF). A CSRF attack tricks the user’s browser into submitting the cookie to the target server even though the request didn’t originate from that site. With the same-site setting on the cookie, the target server can’t be tricked, because the browser will only send the cookie if the request originated from the same site or domain. Google also plans to eventually make cross-site cookies function only over HTTPS connections.
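The browser-side decision described above can be sketched roughly as follows. This is a simplified model, not Chrome’s actual logic (real browsers also weigh request methods, schemes, subdomain rules, and redirects), and the function name and parameters are hypothetical:

```python
def should_attach_cookie(samesite, same_site_request, top_level_navigation):
    """Decide whether a cookie accompanies an outgoing request.

    samesite: the cookie's SameSite value, "Strict", "Lax", or "None".
    same_site_request: True if the request originates from the cookie's own site.
    top_level_navigation: True for navigations like clicking a link.
    """
    if same_site_request:
        return True                  # same-site requests always carry the cookie
    if samesite == "None":
        return True                  # explicitly cross-site cookie
    if samesite == "Lax":
        return top_level_navigation  # link clicks yes, hidden cross-site POSTs no
    return False                     # "Strict": never sent cross-site

# A forged cross-site form submission (the CSRF case) gets no cookie:
print(should_attach_cookie("Lax", same_site_request=False,
                           top_level_navigation=False))  # False
```

Because the forged request is neither same-site nor a top-level navigation, the cookie stays home and the target server sees an unauthenticated request.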

]]>