<![CDATA[Decipher]]> https://decipher.sc Decipher is an independent editorial site that takes a practical approach to covering information security. Through news analysis and in-depth features, Decipher explores the impact of the latest risks and provides informative and educational material for readers curious about how security affects our world. Mon, 18 Mar 2019 00:00:00 -0400 en-us info@decipher.sc (Amy Vazquez) Copyright 2019 3600 <![CDATA[Android Q Steps Up Location Privacy]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/android-q-steps-up-location-privacy https://duo.com/decipher/android-q-steps-up-location-privacy Mon, 18 Mar 2019 00:00:00 -0400

Google is making a number of changes to the way that Android handles location permissions for apps, giving people more options for restricting apps’ usage of location data and making it more difficult for apps to get access to WiFi, telephony, and other sensitive APIs.

The changes are coming in the next version of Google’s mobile OS, called Android Q. Although the final version isn’t due for release until August, Google pushed out a beta release of Android Q last week and among the many changes are several modifications to the app permissions as they relate to user privacy and location. The biggest difference has to do with the way that people can allow or deny permission for a specific app to access location information. In current versions of Android, when an app requests access to location data, the user can only allow or deny that request. In Android Q, the user will have the ability to grant conditional access.

“One thing that's particularly sensitive is apps' access to location while the app is not in use (in the background). Android Q enables users to give apps permission to see their location never, only when the app is in use (running), or all the time (when in the background),” Dave Burke, vice president of engineering at Google, said in a post on the Android Q beta release.

“For example, an app asking for a user's location for food delivery makes sense and the user may want to grant it the ability to do that. But since the app may not need location outside of when it's currently in use, the user may not want to grant that access. Android Q now offers this greater level of control.”

Location data can be highly sensitive for many people and can be used to track an individual’s current location and historical travels. Many apps require access to location services in order to operate correctly, but others request access for reasons that are less clear. The change in Android Q allows device owners to grant and remove access to location data on a conditional basis, something that iOS already has.
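The three-tier grant Burke describes can be modeled as a simple policy check. The sketch below is illustrative Python, not Android's actual implementation; the names `LocationGrant` and `location_allowed` are invented for this example:

```python
from enum import Enum

class LocationGrant(Enum):
    """The three options Android Q exposes to the user."""
    NEVER = "never"
    WHILE_IN_USE = "while_in_use"
    ALWAYS = "always"

def location_allowed(grant: LocationGrant, app_in_foreground: bool) -> bool:
    """Return True if an app may read location under the given grant."""
    if grant is LocationGrant.ALWAYS:
        return True
    if grant is LocationGrant.WHILE_IN_USE:
        # The new middle tier: access depends on whether the app is running
        # in the foreground at the moment of the request.
        return app_in_foreground
    return False
```

The key change is the middle tier: under the old binary model the answer was fixed at grant time, while "only when in use" makes it depend on the app's foreground state.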

Android Q also brings a change to the permissions required for an app to scan for wireless and Bluetooth connections. Now, apps will need to have higher privileges in order to perform some of those tasks.

“Most of our APIs for scanning networks already require COARSE location permission, but in Android Q, for Bluetooth, Cellular and Wi-Fi, we're increasing the protection around those APIs by requiring the FINE location permission instead,” Burke said.

Google has been emphasizing the privacy and security features of Android of late, and seems to be placing even more importance on those properties in Android Q. Since the early days of the iPhone, Apple has positioned it as the most secure mobile device on the market and has played up the exploit mitigations, attack resistance, and privacy enhancing features of iOS. Apple has a vertically integrated ecosystem that includes its own software, purpose-built hardware, and an app store model that requires owners to get apps from the official App Store.

The Android ecosystem is a much different beast, with many custom versions of the OS, dozens of device manufacturers, and an app model that allows owners to install software from third-party app stores. That model gives owners more freedom and flexibility, but it also comes with security trade-offs, as those third-party stores obviously aren’t managed by Google and so their apps don’t go through Google’s rigorous security review process. That review system is a significant hurdle for attackers trying to get malicious apps onto users’ devices, so they tend to avoid it if possible.

Another change Google is implementing in Android Q involves the way the OS handles storage for individual apps. If device owners are using external storage, such as removable cards, Android will assign each app its own sandbox on that medium.

“Android Q gives each app an isolated storage sandbox into an external storage device, such as /sdcard. No other app can directly access your app's sandboxed files. Because files are private to your app, you no longer need any permissions to access and save your own files within external storage. This change makes it easier to maintain the privacy of users' files and helps reduce the number of permissions that your app needs,” the Google developer notes for Android Q say.
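The sandboxing rule in the developer notes can be illustrated with a small model: each app gets its own directory on the external medium, an app's own files require no permission, and direct access to another app's sandbox is denied. This is hypothetical Python; the directory layout shown is an assumption for the example, not Android's real scheme:

```python
from pathlib import PurePosixPath

def sandbox_path(app_id: str, external_root: str = "/sdcard") -> PurePosixPath:
    """Model of per-app sandboxes on a shared external storage device:
    each app is assigned its own isolated directory."""
    return PurePosixPath(external_root) / "Android" / "sandbox" / app_id

def can_access(requesting_app: str, owner_app: str) -> bool:
    """An app may read and write only files inside its own sandbox;
    no other app can directly access them."""
    return requesting_app == owner_app
```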

]]>
<![CDATA[IoT Security Bills Use Federal Spending as Leverage]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/iot-security-bills-use-federal-spending-as-leverage https://duo.com/decipher/iot-security-bills-use-federal-spending-as-leverage Fri, 15 Mar 2019 00:00:00 -0400

A new bill that would establish federal guidelines for the security of IoT devices as well as policies for the coordinated disclosure of vulnerabilities in those devices has been introduced in both the House of Representatives and the Senate, setting the stage for what would be the first set of such standards for federal agencies and the vendors who sell them gear.

The bill includes a number of separate provisions, but the one that stands to have the biggest potential effect on IoT security is the establishment of a set of standards for security in connected devices, standards that will be developed by the National Institute of Standards and Technology. The draft legislation doesn’t set out many specifics for what those security standards would be, but dictates that they will cover four areas: secure development, identity management, patching, and configuration management. Under the language in the bill, vendors selling IoT devices to federal agencies would have to meet the NIST standards in those areas.

The bill, known as the Internet of Things Cybersecurity Improvement Act, would also require the director of NIST to develop “recommendations for the Federal Government on the appropriate use and management by the Federal Government of Internet of Things devices owned or controlled by the Federal Government, including minimum information security requirements for managing cybersecurity risks associated with such devices.”

The weak security of many IoT devices has been a prime topic in both the security community and among legislators for several years, but there hasn’t been much real improvement. Many of the same problems that plagued early generations of IoT devices are still present in more recent versions, including default hardcoded credentials, weak software security practices, a lack of update mechanisms, and many others. The House and Senate bills attempt to address some of these problems through the proposed requirements for secure software development practices and patching, but the specific language in the NIST guidelines will be vital in actually determining whether the standards have any effect.

The UK last year published a set of guidelines on secure development practices for IoT device manufacturers that includes many of the same principles discussed in the IoT bills introduced this week. Manufacturers of IoT devices have been slow to respond to calls from researchers, consumers, and lawmakers to improve the security of their products for a number of reasons, mostly because there’s little if any economic incentive to do so. Connected light bulbs, running shoes, beds, and doorbells are selling just fine as is.

But the best incentive the government has to get things moving in the right direction is its unmatched buying power. If these bills become law, the guidelines developed by NIST could become the standards used in acquisition programs, and there is no greater incentive to clean up a security mess than money.

“As the government continues to purchase and use more and more internet-connected devices, we must ensure that these devices are secure. Everything from our national security to the personal information of American citizens could be vulnerable because of security holes in these devices,” said Rep. Robin Kelly (D-Ill.), one of the sponsors of the House bill.

The second major part of the proposed legislation is the establishment of a coordinated vulnerability disclosure policy for federal agencies using IoT devices. The policy is supposed to be aligned with ISO 29147 and ISO 30111, two international standards that address vulnerability disclosure. The Senate version of the bill requires that the director of NIST “in consultation with such cybersecurity researchers and private-sector industry experts as the Director considers appropriate, publish guidance on policies and procedures for the reporting, coordinating, publishing, and receiving of information about—(1) a security vulnerability relating to a covered device used by the Federal Government; and (2) the resolution of such security vulnerability.”

Vulnerability disclosure in the IoT market has followed the same general path as it did in the early days of desktop software and web applications. Some vendors have reacted to vulnerability reports with hostility or legal threats, others have ignored them, and some have worked with researchers to remediate the problems. The responses have been all over the map, with no consistent set of guidelines for researchers and vendors to follow. The bills on Capitol Hill now would go a good distance toward addressing a large part of that problem. They also would require federal agencies to comply with a set of guidelines on reporting, coordinating, publishing, and receiving information from researchers about vulnerabilities in IoT devices.

The Senate bill is sponsored by Sen. Mark Warner (D-Va.) and several others, and the House bill is sponsored by Kelly and Rep. Will Hurd (R-Texas).

]]>
<![CDATA[Senators Ask For Transparency on Attacks on Senate Computers]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/senators-ask-for-transparency-on-attacks-on-senate-computers https://duo.com/decipher/senators-ask-for-transparency-on-attacks-on-senate-computers Thu, 14 Mar 2019 00:00:00 -0400

In response to past attacks on Senate staff and in preparation for the 2020 election season, two senators have asked the Senate Sergeant at Arms to inform members of the intelligence committee within five days of the discovery of any compromise of a Senate committee and also to provide annual reports to every senator on the total number of breaches of Senate machines.

Senators, Senate staff, and campaign staff have been frequent targets over the last decade, often in attacks attributed to foreign actors. Some of these attacks have resulted in serious compromises, while others have been fairly minor. But what they all have in common is that there’s no specific mechanism in place for senators and members of their staff to be notified about a new attack. Many industries have some kind of centralized information-sharing clearinghouse that collects and distributes data on current attacks and vulnerabilities in that specific vertical.

But there’s no real method for this to happen inside the federal legislature. Sens. Ron Wyden (D-Ore.) and Tom Cotton (R-Ark.) would like that to change, and on Wednesday they sent a letter to the Senate Sergeant at Arms, who is responsible for both the physical and information security of the members of the Senate and their staffs, requesting that he provide regular updates on breaches.

“During the last decade, hackers have successfully infiltrated U.S. government agencies including the Office of Personnel Management, health care firms such as Anthem, and technology giants like Google. Hackers continue to target all manner of government entities, and there is little doubt that Congress is squarely in their sights,” the letter says.

“Indeed, as your predecessor testified before the U.S. Senate Committee on Appropriations in June 2017, ‘the Senate is considered a prime target for cybersecurity breaches.’ The Sergeant at Arms must be transparent in providing members of the Senate all information about the possible existence and scale of successful hacks against the Senate.”

Wyden and Cotton ask Sergeant at Arms Michael Stenger to provide two annual updates to members of the Senate: the aggregate number of Senate computers that have been compromised, and the aggregate number of other incidents in which attackers have gotten access to sensitive Senate data. The letter also asks that Stenger’s office “Commit to a policy of informing Senate leadership and all of the members of the Senate Committees on Rules and Intelligence, within 5 days of discovery, of any breach of a Senate computer.”

This is the second time in the last few months that Wyden, who focuses quite often on privacy and security issues, has asked something similar of Stenger’s office. In September 2018, Wyden sent a letter to several Senate leaders, asking them to allow the SAA’s office to provide cybersecurity services to Senate staffers and members for their personal devices. Wyden also asked the Federal Election Commission if he could use surplus campaign funds to help secure personal devices, which the FEC approved in December.

“Yes, you may use campaign funds to pay for cybersecurity protection for your personal devices and accounts. Such expenses fall within the uses defined as permissible under the Act: ordinary and necessary expenses incurred in connection with the duties of the individual as a holder of federal office,” the FEC’s answer said.

]]>
<![CDATA[Deciphering Mission Impossible]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/deciphering-mission-impossible https://duo.com/decipher/deciphering-mission-impossible Wed, 13 Mar 2019 00:00:00 -0400

The first Mission: Impossible film gave us so many wonderful gifts: goofy Usenet searches, Apple PowerBook action shots, CIA mainframe hacking, and some great mid-career Tom Cruise running. Before the series turned into a high-stakes, high-budget action franchise, the original film was a fun, sometimes goofy heist story with a vein of technobabble running through it. This is Deciphering Mission: Impossible.

]]>
<![CDATA[New Strains of PoS Malware Continue to Rise]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/new-strains-of-pos-malware-continue-to-rise https://duo.com/decipher/new-strains-of-pos-malware-continue-to-rise Wed, 13 Mar 2019 00:00:00 -0400

Malware specifically designed to reside on point-of-sale systems and steal card data has been a key tool in the arsenals of cybercriminals for many years, and PoS malware has been linked to some of the largest data breaches in history. Researchers recently have seen two new strains of PoS malware in use, one of which includes a domain-generation algorithm (DGA) to evade detection and another that is linked to the operator of a separate botnet.

PoS malware comes in all shapes and sizes, and there are dozens of different kinds for sale on underground forums and in private transactions. Researchers at Flashpoint have discovered that some attackers recently have been using the DMSniff PoS malware to steal card data from small businesses in the restaurant and entertainment industries. DMSniff has been in use for at least a couple of years, but until recently was sold only in private transactions. The Flashpoint researchers say that attackers using the malware likely are compromising target PoS devices either by brute-forcing SSH credentials or by exploiting a known vulnerability.

To help evade detection, DMSniff uses a DGA to generate a number of command-and-control domains quickly, domains that the malware can then use to communicate with the outside world once it’s on a new network.

“The DGA is based on a number of hardcoded values; in the samples researchers have found, the first two characters of the generated domains are hardcoded in the bot. Researchers have found 11 variants of this DGA so far, all structured in the same algorithm, but with variable first two letters and hardcoded multiply values in the algorithm,” Flashpoint’s Joshua Platt and Jason Reaves wrote in an analysis of DMSniff.

“The bot loops through the domain generation while rotating through a list of top-level domains (TLDs)—e.g., .in, .ru, .net, .org, .com—until it finds a server it can talk to. The data that was harvested by the bot to create a hostid is then sent off inside the user-agent.”
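A DGA of the kind Platt and Reaves describe, with a hardcoded two-character prefix, a per-variant multiplier, and rotation through a TLD list, can be sketched in a few lines. The Python below is purely illustrative: the mixing arithmetic and all constants are invented for this example and do not reproduce DMSniff's actual algorithm:

```python
import itertools

def generate_domains(seed, prefix="ab", multiplier=0x21, count=5,
                     tlds=(".in", ".ru", ".net", ".org", ".com")):
    """Illustrative DGA: derive pseudo-random domain bodies from a seed
    using a hardcoded prefix and multiplier, rotating through a TLD list."""
    domains = []
    state = seed
    tld_cycle = itertools.cycle(tlds)
    for _ in range(count):
        # Invented mixing step; real DGAs use variant-specific arithmetic.
        state = (state * multiplier + 7) & 0xFFFFFFFF
        body = prefix + "".join(
            chr(ord("a") + ((state >> (8 * i)) & 0xFF) % 26) for i in range(4)
        )
        domains.append(body + next(tld_cycle))
    return domains
```

Because the algorithm is deterministic, defenders who recover the constants can run it forward to predict and block or sinkhole candidate domains, which is one reason DGA-based malware is often updated with new variants.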

Botnets and other types of malware have used DGAs for many years, but the technique isn’t nearly as common in PoS malware.

Researchers at Cisco’s Talos Group also came across a new piece of PoS malware, called GlitchPOS, which appears to be connected to a malware author who has sold other kinds of malware in the past. GlitchPOS is sold on malware forums, and like DMSniff and other PoS malware, it’s designed to steal card data from infected devices before the data is encrypted. The threat actor who is selling GlitchPOS, and who appears to have created it, also has been seen selling the older DiamondFox malware, which had some PoS capabilities, too.

“In 2017, the DiamondFox malware included a POS plugin. We decided to check if this module was the same as GlitchPOS, but it is not. For DiamondFox, the author decided to use the leaked code of BlackPOS to build the credit card grabber. On GlitchPOS, the author developed its own code to perform this task and did not use the previously leaked code,” Warren Mercer and Paul Rascagneres of Talos said in their analysis of the new malware.

PoS malware has evolved quite a bit over the years, but the basics have remained: infecting PoS devices and stealing card data. Its use has been a remarkably effective tactic for many cybercrime groups, and that’s likely to remain the case for some time.

]]>
<![CDATA['People Have a Right to Free Speech, But a Bot Doesn't']]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/people-have-a-right-to-free-speech-but-a-bot-doesn-t https://duo.com/decipher/people-have-a-right-to-free-speech-but-a-bot-doesn-t Tue, 12 Mar 2019 00:00:00 -0400

SAN FRANCISCO--Since the last presidential election, awareness of and concern about misinformation and propaganda campaigns have been increasing steadily, especially now as the next campaign draws closer. But information manipulation is not just a seasonal problem, and it can affect a wide range of organizations, enterprises, and individuals.

Much of the discussion of misinformation, disinformation and manipulation campaigns since the 2016 election has been focused on how Russian groups and individuals used social media platforms to influence sentiment. Both Facebook and Twitter have come under scrutiny from government agencies and outside critics for how they handled foreign influence operations on their platforms and whether they were too slow to respond. And both companies have made changes to the way that they identify and remove inauthentic accounts and content since then, but that is certainly not the end of the story. Bad actors change their tactics and adapt as the situation warrants.

“I think what we see is it’s very clear that different threat actors work across different platforms,” Nathaniel Gleicher, head of cyber security at Facebook, said during a panel discussion on the weaponization of the Internet during the RSA Conference here.

“The way you make progress in security is you find a way to impose more friction on the bad actors without imposing it on users. That’s an incredibly big focus that we have right now.”

Facebook, even more than Twitter, has taken much of the heat for not being quick enough to identify and eliminate inauthentic content and misinformation campaigns on its platform. The criticism focuses on the company’s executives and security team being caught off-guard by the scope and sophistication of the misinformation and manipulation campaigns. But countering these kinds of campaigns isn’t always a simple process, especially when there are multiple avenues of influence and manipulation in play. In the case of the 2016 election, there may have been too much focus on one avenue in particular.

“We were looking in the wrong place. We were looking for people hacking Facebook client accounts, and not buying ads at scale that more than half of the American population saw,” said Peter Warren Singer, a strategist at New America.

It’s not just the platform operators that have taken a hit over foreign influence operations; United States intelligence agencies also have come under criticism for not seeing what was happening earlier. But Rob Joyce, a senior adviser at the National Security Agency, said the intelligence community was aware of what foreign organizations were up to, but was not looking inward, at Facebook and Twitter.

“I don’t think we missed it. The intelligence community has looked at Russian manipulation and influence operations long before there was a cyberspace. There is an understanding of the tradecraft and techniques. We’ve watched them advance and move online, and it’s through that observation that we’ve been pretty successful,” Joyce said. “Now that we understand what’s going on, what do we do about it? That’s the challenge.”

“There were efforts, but like all of us on platforms and in government and in civil society, we’re trying to shape and react when we’re in the middle of speech. And that’s a pretty difficult place for America to go.”

One of the main outcomes of all the discussions, hearings, and arguments about disinformation and influence operations is a call for more regulation of and transparency into social media platforms’ operations. That idea necessarily involves the issue of free speech and the attendant concept of anonymity online. Both are sensitive issues and involve difficult conversations for government agencies as well as the platform providers.

“We think a lot about authenticity and not necessarily just anonymity. There are very good reasons why people don’t want to describe every single detail about themselves,” Facebook’s Gleicher said.

The NSA’s Joyce agreed, emphasizing the challenges that taking away anonymity can present.

“It’s actually dangerous in some countries. I can’t imagine the department of truth or ministry of truth setting up in our government,” he said. “I do believe people have a right to free speech, but a bot doesn’t. Where we can take away the inauthentic voices of bots, I think we should do that.”

]]>
<![CDATA[Decipher Podcast: RSA 2019]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/decipher-podcast-rsa-2019 https://duo.com/decipher/decipher-podcast-rsa-2019 Mon, 11 Mar 2019 00:00:00 -0400

Dennis Fisher sits down with Fahmida Rashid, Mike Mimoso, and Jessy Irwin at the RSA Conference in San Francisco to talk about the major themes of the conference.

]]>
<![CDATA[Tech Giants Weigh In on U.S. Federal Data Privacy Law]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/tech-giants-weigh-in-on-u-s-federal-data-privacy-law https://duo.com/decipher/tech-giants-weigh-in-on-u-s-federal-data-privacy-law Mon, 11 Mar 2019 00:00:00 -0400

SAN FRANCISCO--There are signs Congress will tackle privacy legislation again this year, and technology companies such as Google have a keen interest in shaping the federal privacy law. While there are several points of disagreement on what the law should cover, interest is high on both sides of the aisle in Congress to do something on the federal level to protect consumers, said a panel of policy executives from Google, Microsoft, and Twitter at RSA Conference.

The likelihood of a federal privacy law passing in the next year is higher than in years past—Julie Brill, corporate vice president and deputy general counsel at Microsoft, optimistically pegged the odds at 30 percent—and the time is ripe for this discussion.

The European Union’s General Data Protection Regulation (GDPR) went into effect a little less than a year ago, and companies in the United States with European users have been overhauling their policies to ensure they are in line with the new, stringent data privacy requirements. The California Consumer Privacy Act (CCPA), which would give California residents significant control over their data, is set to go into effect in January 2020.

Recent incidents—including Cambridge Analytica collecting information on millions of Facebook users and the massive Equifax breach, in which personal data for millions of Americans was stolen—clearly illustrated the lack of protections for consumers at the federal level. The House Energy and Commerce Committee and the Senate Judiciary Committee have held hearings, and the Federal Trade Commission has scheduled privacy hearings for April.

There wasn’t this much interest among lawmakers and industry groups two years ago. The fact that the Chamber of Commerce released model privacy legislation calling for a federal privacy law last month was a “sea change,” Brill said.

“It’s no longer a question of if there will be a privacy bill, but what that bill would look like,” Brill said.

Core Elements

Most technology companies agree that a federal law governing the collection and use of consumers’ data is essential. Brill said the federal law needs to include three elements: users should have strong control over what data is being collected; companies should be transparent about their data collection and usage; and a strong enforcement mechanism should be in place to hold companies accountable.

The disagreement lies in the details, such as whether companies should start data collection only after the user has given permission to do so (opt-in) or stop data collection after the user rescinds permission (opt-out). Consumers should know what kind of data is being collected, but the question is whether companies should have to disclose every piece of data they’ve collected on a person, or whether they can just list categories of data. And the list goes on.

With CCPA, the tech companies lobbied hard against giving individuals the right to sue companies for privacy violations. It will be interesting to see whether this provision makes it into the federal law.

GDPR or Not GDPR?

The U.S. Government Accountability Office recommended Congress develop internet privacy legislation similar to GDPR to enhance consumer protections in a report released mid-February.

"Recent developments regarding Internet privacy suggest that this is an appropriate time for Congress to consider comprehensive Internet privacy legislation," GAO said in the report.

The U.S. law does not need to be as prescriptive as GDPR, said Sarah Holland, public policy manager at Google. A better approach would be a “risk-based/outcome-based framework” that defines the overall requirements or objectives and lets businesses figure out the appropriate processes, Holland said. The law should allow users to decide how much privacy control they want to exercise, so some would take greater control and others would be more lax. This would be very different from GDPR.

“There are a lot of laws on the books already that apply to a comprehensive baseline federal legislation,” Holland said.

Holland repeated concerns that regulations would stifle innovation, as complying with strict rules would be onerous for small businesses. Nithan Sannappa, associate legal director of product at Twitter, agreed with Holland, noting that GDPR’s right-to-access provisions require companies to let users know what data they have on them. When Netflix was asked to provide a user’s Bandersnatch viewing history, it was able to do so, but most small businesses don’t have the resources to build the systems and processes needed to provide that level of granular information upon request, Sannappa said.

“Any federal regulation should make careful consideration of the benefits and burdens and the tradeoffs between the two,” Sannappa said.

There is some self-interest here for Google, as well. Any kind of data privacy law would potentially affect at least 50 products at Google. “We want to make sure that we can meet user requirements for functionality as well as control and privacy,” Holland said.

Sannappa said it was important to balance the “harms we’re trying to prevent” and the “benefits regulation will enable.” Brill noted that the U.S. and Europe discuss harm differently as the Europeans view privacy as a fundamental right.

“Thinking about privacy as a right will start orienting U.S. businesses towards what’s happening around the world and may create a true paradigm shift,” Brill said.

GDPR Around the World

GDPR’s stringent requirements have paved the way for other countries, and “over the next five to 10 years, you can see that standards in Europe will be operable in a great deal of the world,” Microsoft's Brill said.

Under GDPR’s adequacy requirement, data about European users can be transferred only to “a market compliant with European standards,” so countries are beginning to pass laws that align with the EU regulations. Brazil has passed legislation, and India and South Korea are considering proposals. The United States will have to do the same with its privacy law. The U.S. version will need to include user control over data; accountability and transparency in how companies use data; and strong enforcement, Brill said, noting that all three are components of GDPR and two are in CCPA.

However, Brill cautioned that “it would be difficult to translate GDPR” in its entirety.

State vs Federal

Pro-business groups such as the Chamber of Commerce and tech companies would like to see a federal law that is more industry-friendly than CCPA. Privacy advocates worry that being industry-friendly means the federal law would be weaker than CCPA, and that the federal law would preempt California, removing those strong consumer protections.

Preempting CCPA won’t be easy, especially since California has 53 members in Congress. Aside from that, there is a sense among lawmakers that discussion should focus on what kind of protections are needed and not on how states are trying to protect consumers.

"We're not going to get 60 votes for anything and replace a progressive California law, however flawed you may think it is, with a non-progressive federal law," Sen. Brian Schatz (D-Hawaii) said during a recent hearing of the Senate Commerce Committee where executives from Amazon, Apple, AT&T, Google, Twitter, and Charter Communications testified.

Schatz introduced the Data Care Act of 2018 to require companies to “reasonably secure” identifying information and promise not to use it in harmful ways. Users would be notified in case of a data breach and third-parties with access to the data would also have to adhere to the same standard. The bill also expanded the FTC’s enforcement powers.

“A federal law needs to be worthy of preemption,” said Brill. “It needs to be a strong federal law. That conversation should be at the end, not the beginning.”

Existing Privacy Laws

If the U.S. decides to model its law on GDPR or CCPA, both offer a clear definition of what constitutes personal information. GDPR defines personal information as anything that relates to an identified or identifiable person either directly or indirectly. CCPA goes even further, covering both the individual and data belonging to the household. Personal information under CCPA also includes inferences that can be drawn from mining different data sets.

GDPR and CCPA overlap on the major points, giving consumers some control over the data collected by online companies: the right to know what is being collected, to access that data, to delete or correct it, and to carry one’s data from one company to another.

CCPA also gives consumers the right to opt out of having their information sold to other entities.

While Congress has been talking, and talking, and talking about what needs to go into the privacy law, states have been moving. Bills similar to CCPA are being considered in 11 state legislatures and some state agencies are weighing developing privacy rules for specific industries. Washington state is considering a law that would give consumers new rights and impose restrictions on companies using personal data for profiling and facial recognition.

“We’ve worked with the states on their laws, working with legislators to make improvements,” Microsoft’s Brill said. “We feel there needs to be something on the books because we need to engender trust with consumers; we recognize the moment that we’re in, and know we need to address it.”

Brill compared the current situation to what happened with the data breach notification laws. Absent a federal breach notification law, states enacted their own, with California enacting the most comprehensive one that acted as a model for several other states.

“If it weren’t for the states, we would know so much less about what’s happening with breaches. There would be a lot less information to go on. That has been important and it happened at the state level, starting with California but almost every other state followed,” Brill said.

]]>
<![CDATA[FBI Intensifies Its Focus on Cybercrime]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/fbi-intensifies-its-focus-on-cybercrime https://duo.com/decipher/fbi-intensifies-its-focus-on-cybercrime Fri, 08 Mar 2019 00:00:00 -0500

SAN FRANCISCO--The FBI handles a broad range of criminal threats, but addressing the threat to critical infrastructure and organizations from cyber attackers has become one of the bureau’s top priorities, occupying much of the FBI’s time and resources, the bureau’s director says.

“The diversity of the cyber threat we face right now is unlike anything that we’ve faced in our lifetimes,” Christopher A. Wray, the director of the FBI, said during a keynote talk at the RSA Conference here this week.

“The range of attackers and attacks is unprecedented.”

Wray, a former Assistant United States Attorney and Assistant Attorney General, took over as FBI director in August 2017 after the departure of James Comey. He said that cyber attacks present a unique challenge for law enforcement agencies, because the scope of both the attacks and the attack groups is so large. Cyber attacks comprise a wide range of individual techniques, and different groups have their own individual motivations, targets, and goals. The FBI, as the country’s top law enforcement agency, is tasked with investigating not just everyday cybercrime, but also more sophisticated nation-state attacks and operations by foreign intelligence services. This is no small challenge.

“We’re dealing with foreign intelligence services, other nation state-affiliated groups, cybercrime groups, all of it,” Wray said.

Even for an organization with the manpower, resources, and investigative experience of the FBI, defending against and investigating this broad range of attacks and adversaries is difficult. Wray said the FBI relies heavily on cooperation with private sector organizations for help with threat intelligence and other information.

“The reality is, we couldn’t do what we do without the private sector,” he said.

There are many former FBI cybercrime investigators in the private sector now, and there have been both formal and informal information-sharing programs involving the FBI and security vendors for many years. One of the limitations of those programs in the past has been that much of the information flow went one way: from the private companies to the FBI. This is partly due to the nature of criminal investigations, which prevents law enforcement from being able to share certain kinds of information.

But recently, there have been a number of examples of the FBI informing organizations about active attacks or penetrations of their networks. Just today, Citrix, the virtualization software vendor, announced that the bureau had alerted it on March 6 to a possible compromise of the Citrix internal network. Citrix said it has started a forensic investigation into the attack.

“While our investigation is ongoing, based on what we know to date, it appears that the hackers may have accessed and downloaded business documents. The specific documents that may have been accessed, however, are currently unknown. At this time, there is no indication that the security of any Citrix product or service was compromised,” Stan Black, the CISO of Citrix, wrote in a post.

“While not confirmed, the FBI has advised that the hackers likely used a tactic known as password spraying, a technique that exploits weak passwords. Once they gained a foothold with limited access, they worked to circumvent additional layers of security.”
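Password spraying inverts the usual brute-force pattern: rather than hammering one account with many guesses, the attacker tries a few common passwords across many accounts to stay under lockout thresholds. As a minimal, hypothetical sketch of how a defender might surface that pattern in failed-login records (the function, thresholds, and log format here are illustrative assumptions, not anything Citrix or the FBI described):

```python
from collections import defaultdict

def flag_spraying(failed_logins, min_accounts=10, max_attempts_per_account=3):
    """Flag source IPs whose failures look like spraying: many distinct
    accounts, each tried only a few times (to dodge lockout policies).

    failed_logins: iterable of (source_ip, username) tuples.
    """
    by_source = defaultdict(lambda: defaultdict(int))
    for source_ip, username in failed_logins:
        by_source[source_ip][username] += 1

    flagged = []
    for source_ip, accounts in by_source.items():
        # Spraying signature: broad across accounts, shallow per account.
        if (len(accounts) >= min_accounts
                and max(accounts.values()) <= max_attempts_per_account):
            flagged.append(source_ip)
    return flagged
```

A source that tries a dozen usernames once each would be flagged, while a classic brute-forcer hammering a single account would not; real detection would also weigh timing and password reuse across attempts.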

]]>
<![CDATA[Improve Risk Perception, Get Better Decisions]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/improve-risk-perception-get-better-decisions https://duo.com/decipher/improve-risk-perception-get-better-decisions Fri, 08 Mar 2019 00:00:00 -0500

SAN FRANCISCO--It is a trope within the security industry that humans are “awful” at risk management and make poor choices, when in fact, humans are rather good at making decisions, Andy Ellis, CSO of Akamai Technologies, told attendees at RSA Conference.

A stakeholder from the business side approaching the security team rarely looks forward to the conversation. Business owners expect to be held to an “impossible standard,” while security teams tend to focus on the “horrible risks” being taken and react with “Why are you doing this?” Neither side is thinking about the reasons for the other side’s behavior; each immediately finds fault because it doesn’t understand the circumstances that led up to the decision.

“In their mind, security is the bad guy. We are the people whose goal is to tell them how ugly their project is, and all the poor choices they made, and how we don’t think they should be employed at this company anymore,” Ellis said. “We’re not inclined to work together. We’re telling the story where they are the villain. In fact, they’re telling the opposite story, where we’re the villain.”

The decision made by the business may seem incomprehensible to the security professional—but it also presents an opportunity for the security team to learn why the person made that particular choice.

Ellis used the “OODA loop” decision-making model developed by United States Air Force Colonel John Boyd to lay out his case for why humans were “awesome”—not perfect, but pretty good—at risk management. The model frames decision-making as a recurring cycle of observing what is happening, orienting or filtering the raw information through past experiences and cultural values, deciding what to do next, and acting on that decision.
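As a rough illustration, and not Boyd’s own formalism, one pass through the loop can be sketched as four pluggable stages; the names and the toy fever example below are hypothetical:

```python
def ooda_step(observe, orient, decide, act, world):
    """One pass through an OODA-style loop: observe raw inputs, orient
    (filter them through a model), decide on an option, then act."""
    raw = observe(world)       # what is happening?
    framed = orient(raw)       # filter through experience and context
    choice = decide(framed)    # pick the next move
    return act(choice, world)  # change the world; the loop then repeats

# Toy run: treat "world" as a temperature reading and decide what to do.
new_state = ooda_step(
    observe=lambda w: w["temp"],
    orient=lambda t: "fever" if t > 100 else "normal",
    decide=lambda framed: "cool down" if framed == "fever" else "carry on",
    act=lambda choice, w: {**w, "action": choice},
    world={"temp": 104},
)
```

The orient stage is where Ellis’s point about past experience and cultural values lives: two people observing the same raw data can frame it, and therefore decide, very differently.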

Context Matters

Observing involves paying attention to the world, but also looking at the myriad of inputs and picking which ones are important. (“Is someone shooting at me or is it a sunny day?”) Framing the information in light of what is happening helps to make sense of all the different inputs. If the person is giving a talk on a stage, then information about the number of pedestrians outside will not be relevant, but that same piece of information will be very important if the person is driving down the street.

“This is a challenge we have, that it is hard for us to put ourselves in the mode of our counterparts when we are engaging in anything, let alone complex conversations about risk,” Ellis said.

Organizations have “historical paranoia,” where the focus is on not doing something that previously got someone else in trouble, without explaining why. In fact, if anyone asks for the reason, the question is dismissed. For example, many organizations have a security policy requiring passwords to be changed every 90 days. It made sense to do so when it took about 90 days to crack a password. Nowadays, passwords can be cracked even sooner, or have already been stolen through other means. Even though 90 days is no longer helpful, organizations persist in following this policy because that is what security teams are used to.

Another example is writing down passwords and putting them in a password vault. It is good advice, but it’s the “exact opposite” of what security professionals have long told people, Ellis said. The context the security professionals have is often the wrong context for the world people live in, and the disconnect is one of the reasons why one side can’t understand the decisions made by the other side.

“We collected a list of everything that ever got anyone in trouble, and say, ‘Don’t do those things,’ but we don’t understand why we say that anymore,” Ellis said.

Assessing Risk

Some risks are more straightforward to understand than others. Showing up to hear Ellis speak was a risk: if he turned out to be a terrible speaker, the cost would be 50 lost minutes. Falling asleep during a team meeting could result in getting fired. In more complex situations, the benefits and trade-offs become obscure and the risks are harder to understand.

In the 1970s, the risk of buying a Ford Pinto was clear: the gasoline tank sat in the back, so a rear-end collision could rupture the tank and cause a leak. Fast forward forty years, and cars are now networks of computers that drive themselves around. Pushing the accelerator is no longer a mechanical operation, but one that kicks off multiple computer programs. A computer has many more things that can go wrong, and fixing it is harder than fixing a mechanical problem. Stunt hacking showed that attackers could take over a Jeep Cherokee, but the mindshift of thinking of cars as computers has to happen first in order to really understand the dangers of driving one.

“I can’t really explain, at the level I explained the Ford Pinto, what the bad design choice was [for the Jeep Cherokee],” Ellis said.

People adjust how much they are willing to lose or spend based on the situation. In a game where the player places a bet and then guesses a number to be rolled on a 20-sided die, the likelihood of winning is the same regardless of the size of the bet. Logically, a person willing to play for $1 should be willing to play for a million dollars, but that ignores cost context. As people’s perception of the risk went up, they acted to reduce their exposure to it.
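The arithmetic behind that point is straightforward: the die fixes the win probability, and only the stakes scale. A quick sketch, with an assumed 20x payout since the talk did not specify one:

```python
def expected_value(bet, payout_multiple=20, sides=20):
    """Exact expected value of one round: win with probability 1/sides
    and collect bet * payout_multiple, otherwise lose the bet."""
    p_win = 1.0 / sides
    return p_win * bet * payout_multiple - (1 - p_win) * bet

# The odds never move with the stake: a $1 bet and a $1,000 bet have the
# same 1-in-20 chance of winning, and the EV just scales linearly.
```

Under these assumptions the game is slightly favorable at any stake, yet, as the show of hands demonstrated, people still drop out as the bet grows because the potential loss outgrows what they can comfortably absorb.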

Nearly everyone in the room indicated with raised hands that they would play the game at initial bets of $1 and $10. People started dropping out when the bets rose to $100, and by $1,000 the majority had stopped playing; only one person held out until $100,000. People value something by what they have to give up.

“$1 is change for me. $10 starting to be interesting, that is a drink. $100, that is a nice dinner at RSA [Conference]. $1,000, that is interesting money. I am totally out,” Ellis said.

An organization has to consider the cost context in its risk management discussions. A product manager will not react well to the idea of pushing out a product’s release date to address some risks because they consider those dates fixed and don’t dare alter them. A security leader could instead suggest moving the feature to the next cycle to buy time to address the risks without impacting the original release date.

“When [the decision] is in their cost context, it’s really invaluable,” Ellis said.

Understanding Decisions

Understanding what people pay attention to helps in understanding how they view risk and why they made the decisions they made. The human brain is constantly deprioritizing information that doesn’t seem unusual or directly relevant because otherwise there is simply too much sensory data to deal with. Drivers see pedestrians while driving, but won’t remember details about the ones who didn’t run into the street because they weren’t important to the situation at hand. People tend to pay attention to things that are new, not things that happened last year, and to things that affect their “tribe.” People also remember “surprising” things that “feel true.” Problems that are “far away” in time, geography, or social group tend to get dropped.

Understanding the decision-making loop is important because adversaries are doing the same thing: observing what the organization is doing and modifying their actions to inject conflicting information or to hide their activities. Organizations can look at their own loops the way adversaries do to find areas for improvement. A key step is to reduce the amount of information coming in; if the organization is paying attention to things it isn’t acting on, it should stop paying attention to those things, Ellis said. Along with better instrumentation, organizations need to review their models to make sure they have accounted for potential traps in how they frame the data. Once a decision is made, the organization has to check its assumptions to make sure the outcome makes sense. Finally, organizations need to make a plan and practice it so that everyone knows exactly what needs to be done when something happens, whether through tabletop exercises or training.

People make decisions based on what they paid attention to. Shaping the way they perceive risk, by taking into account their models and things they fear, influences the end result—the decision and the actions taken.

“Humans are situationally awesome at risk management,” Ellis said.

]]>
<![CDATA[Security Education: Running With Scissors]]> info@decipher.sc (Dave Lewis) https://duo.com/decipher/security-education-running-with-scissors https://duo.com/decipher/security-education-running-with-scissors Thu, 07 Mar 2019 00:00:00 -0500

Childhood is a formative time when we are exploring the world and everything seems new. The lessons we learn as children help us gather information for use later in life: the difference between right and wrong, for example. We learn that it is not wise to tug on Superman’s cape, spit into the wind, pull the mask off a masked stranger, or mess around with people named Jim (hat tip to Jim Croce).

An important lesson from those early years is something we all take for granted as adults: Don’t run with scissors. When you are young, the ramifications of such actions never really hit home, but either through parental teaching or the school of hard knocks, we all figure this one out at some point. The point is that we learned the lesson and it stuck.

So, why do we as security practitioners all too often throw our arms up in despair when the subject of security education inevitably bubbles to the surface? This is one of the difficulties inherent in a maturing industry. There is a fascination with tackling the fun parts of the job, but an aversion to the heavy lifting necessary to keep the gears moving. If we don’t spread the security message to all aspects of society, we are introducing exposures that attackers can leverage to their own nefarious ends.

I was once asked during an interview how I would hack into a certain company. I said, “Oh that’s easy.” They laughed at my reply and asked me to expand. I said, “I’d take the CEO’s admin to lunch.” Based on the response I knew I had rattled their way of looking at the problem they had presented. Attackers will come through your front door until you build a better front door. Then they’ll try the windows and it spirals from there. It is never safe to assume that an attacker will stop because of a certain control that you’ve installed.

The reality is that security is in fact everyone’s job. But let’s be honest with ourselves and ask the hard question: What does that actually mean? Further to that end, what does it mean to the rank and file in your organization as it pertains to their daily jobs? How does your company approach security education?

Most organizations have mandatory security education when an employee starts as a new hire. Did your security education stay with you? Did your training change your behaviors? Have you tried injecting humor into the lessons? There is research to back up that last suggestion.

From the 2006 American Psychological Association article, “How laughing leads to learning”:

However, a growing body of research suggests that, when used effectively, classroom comedy can improve student performance by reducing anxiety, boosting participation and increasing students' motivation to focus on the material. Moreover, the benefits might not be limited to students: Research suggests that students rate professors who make learning fun significantly higher than others.

You want your security education program to be effective. If humor helps with retention, it’s worth considering.

Repetition is often cited as the element that makes a message stick. Marketing folks have their rule of seven, which says a prospect needs to see an advertisement seven times before making a purchasing decision. We can use the same idea to get the security message across to a wider audience. You want your audience to take the training to heart, so finding a way to consistently reinforce the message is key. Being able to do this without driving people up a wall is a longer discussion.

When I was young I played a lot of sports. The winning team would get the big trophy, the second-place team would get a smaller trophy, and the remaining teams would get the participant ribbon. I always loathed those. Nowadays, when many organizations deliver security training, they allow the employee to print out a certificate of completion. This has always struck me as a redux of that participant ribbon.

What if you took the training, added the humor, and hammered the message home repeatedly? You would have the makings of a fun and engaging training program that staff want to take part in. Creating an award-based system is a great example. To be clear, it doesn’t have to be a monetary reward. It can be a recognition email to peers, or a positive email to the person’s direct manager with them cc’d. Something to provide that positive feedback loop.

The adage that security is everyone’s job is true, and not only for the staff. It also applies to the program that is rolled out to deliver the wider message. One can’t simply run a SCORM-compliant training course and tick a box that says you’re done. This requires a well-thought-out program to strengthen security awareness training and help us all learn to stop running with scissors.

]]>
<![CDATA['Who Knows What These Computers Are Doing?']]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/who-knows-what-these-computers-are-doing https://duo.com/decipher/who-knows-what-these-computers-are-doing Tue, 05 Mar 2019 00:00:00 -0500

SAN FRANCISCO--Technology advances and evolves at a frighteningly fast rate, which is great for users, but the pace of change makes it ever more difficult for security technology to keep up.

Security is difficult to get right, and that challenge is made more daunting when the systems and devices change constantly. The task of figuring out how to defend a given system grows more complex by the day, something that even some of the pioneers of the security community struggle with.

“The most trustworthy computer I’ve ever owned had two floppy drives. When you were done with it, you powered it down and you could be reasonably sure that nothing foreign happened to it,” Paul Kocher, a cryptographer who helped develop the idea of differential power analysis attacks on cryptosystems, said during the cryptographers’ panel at the RSA Conference here Tuesday.

The same thing certainly can’t be said about today’s computing devices. Modern devices are rarely shut down completely and are subject to an ever-widening array of attacks, many of which were not even contemplated by the designers of software and hardware from just a couple decades ago. Attacks always get better, and while computing devices and security have improved as well, it hasn’t been an even race. Many of the attacks that are prevalent today take advantage of the complexity of target systems, and complexity is usually the enemy of security.

“Thirty years ago, we had computers that we knew how they worked. That’s not true now. Who knows what any of these computers are doing?” said Whitfield Diffie, one of the pioneers of public-key cryptography.

Part of the problem, the panelists said, is that modern computing relies so much on interconnected systems distributed across the globe. Those systems are often owned and operated by people or organizations with which a given individual has no actual connection or relationship. That requires the individual to trust both the system and the operator of it, a requirement that isn’t really ideal for security.

“Trust isn’t necessarily the right word to use. It implies that I believe something that I haven’t actually verified for myself,” Kocher said. “We can never actually have complete trust in somebody across the internet whose objectives might be unknowable.”

During the panel, which also included Ron Rivest, one of the designers of the RSA algorithm, and Shafi Goldwasser of the Simons Institute for the Theory of Computing, the cryptographers also talked quite a bit about the push in various countries for backdoor access to encrypted communications and devices. There is legislation in both the UK and Australia that includes provisions for law enforcement access to encrypted communications, through either technical or judicial means, and officials from the FBI and other agencies in the United States have been pushing for something similar.

But security experts in general and cryptographers specifically say any back door in an encrypted system, regardless of whether it’s for law enforcement use, not only weakens the system but also provides another target for attackers. There have been a handful of cases over the years of back doors being found in cryptosystems, and intelligence agencies are known to have exploited some of them, at least. Kocher said the idea of using legal means to force companies to weaken their own products is counterproductive.

“I think if anyone should be going to prison, it’s the developers who put back doors in their products without telling their managers or anyone else,” he said. “I don’t think Australia can do better than the NSA, so I don’t think this is going to end very well for any of us.”

]]>
<![CDATA[Don't Despair, Good Privacy Days Ahead]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/do-not-despair-good-privacy-days-ahead https://duo.com/decipher/do-not-despair-good-privacy-days-ahead Mon, 04 Mar 2019 00:00:00 -0500

SAN FRANCISCO--While it is “really easy to be nihilistic” about the current state of privacy, there is also plenty to be pleased about, such as the almost-year-old European privacy law and the fact that companies are beginning to compete on privacy, Jon Callas, a technology fellow at the American Civil Liberties Union, said in his keynote at the CSA Summit.

“The good news is the privacy situation has gotten so bad that people want to change it,” Callas said. “That means that over the next five to 10 to 20 years we’re going to see the pendulum swing back the other way...There will be actions done on behalf of consumers and all sorts of things done from a regulatory space as people have decided that they just don’t like it.”

A computer security expert who was key to the development of PGP encryption and a founding member of the Cloud Security Alliance, Callas said the rapid advancements in technology have made it possible for him to carry a good camera in his pocket and to request a car to take him somewhere even in an unfamiliar location. However, all these conveniences come with trade-offs: collecting location data, for example, makes it possible for someone to plot on a map the exact route a person took to get somewhere.

“I love living in the future. I think it is marvelous,” Callas said, noting that privacy doesn't mean rejecting tech's benefits.

Privacy Victories

Regulators are noticing and taking steps to rein in some of the rampant data collection by companies. Europe’s General Data Protection Regulation and California’s recently enacted Consumer Privacy Act are both positive developments. GDPR requires all companies that collect personal data about European Union citizens to be transparent about why the data is being collected, to delete the data upon user request, and to disclose a data breach within 72 hours. Companies in the United States may not be wild about GDPR because it forces them to prioritize the protection of user information, but the smart companies are rolling out GDPR protections for all users, not just the European ones. This spillover benefit is a good thing.

“Number one on the list of where we’re getting things right is GDPR,” Callas said. “It’s certainly not perfect, but what it’s making us do in terms of looking at user privacy in a more rigorous way will help us advance.”

One of the reasons it took so long to get privacy features baked into technology was the perception that people didn’t care about privacy. The increased scrutiny of corporate data collection and the discussion of privacy disasters are fueling people’s demands for better privacy, Callas said. This has pushed some companies to see privacy as a market opportunity and privacy features enabled by default as a competitive advantage. Apple touts itself as being more privacy-conscious than the competition. Laptops encrypt user data on disk by default. TLS everywhere is a reality, as it is now possible for users to spend their day online without encountering any pages not served over HTTPS. Google is policing apps on Google Play for privacy missteps.

"People do care [about privacy]," Callas said.

Redefining Privacy

Privacy is hard to define precisely, but the prevailing definition focuses on the right to be left alone and to be unobserved. “When you want to do something in private, you want to be able to close the door and do it,” Callas said. The idea of a “reasonable expectation of privacy” is the driving principle in privacy, but changes in how people interact with technology have eroded the perception of what reasonable expectations look like.

Back in the late 1800s, it would have been considered “beyond the pale” to subpoena a person’s diary, considered to be one’s “innermost thoughts, feelings, conversations with ourselves,” for a legal proceeding, but now that is considered acceptable. Other expectations, such as the idea that people have less privacy in their cars than they do in their homes, are still evolving.

“I know if I walk down the street for a quarter mile that I will be photographed three times," Callas said. "We need to rethink this."

Callas is not encouraging privacy nihilism (“You think that is reasonable? Gosh, you are naive.”). There is a balance between embracing technology and wanting some restraint over what companies are allowed to do. Unfortunately, “surveillance capitalism,” in which a company’s business model depends on collecting as much data as possible and selling it to as many buyers as possible, is a problem.

Smart-TV maker Vizio was at least being honest when its CTO said on The Verge’s podcast that the TVs would cost more if the company didn’t monetize them by collecting user data, selling it to advertisers, and offering direct-to-consumer entertainment. Callas wanted to know how much more a privacy-focused model would cost.

"I have my wallet out," he said.

Data collection by the government becomes mass surveillance, and Callas was adamant that there needed to be curbs on government "arrogance and overreach." A recently passed law in Australia authorizes law enforcement to force technology companies to create backdoors—a deliberately added security vulnerability—into their products. India is fighting Facebook over creating a backdoor in encrypted messaging app WhatsApp, and China is interested in similar rules. There is a concern that a court decision in India could affect the privacy rights of users in the United States.

“The idea of surveillance backdoors is not going to be something that’s limited to this little club of good democracies and not the others,” Callas said. “Countries are starting to say, ‘Hey, if others are doing that, we want in, too.’”

“The future of privacy is neither pretty good nor futile,” Callas said. There's plenty to worry about, and enough to look forward to.

]]>
<![CDATA[Decipher Podcast: Yonathan Klijnsma]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/decipher-podcast-yonathan-klijnsma https://duo.com/decipher/decipher-podcast-yonathan-klijnsma Fri, 01 Mar 2019 00:00:00 -0500

Dennis Fisher talks with Yonathan Klijnsma of RiskIQ about his new research into Magecart Group 4, the background and tactics of the web skimming group, and how the defender community responds through takedowns and other techniques.

]]>
<![CDATA[Huawei and the 5G Conundrum]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/huawei-and-the-5g-conundrum https://duo.com/decipher/huawei-and-the-5g-conundrum Wed, 27 Feb 2019 00:00:00 -0500

Telecommunications providers around the world are gearing up for 5G, the latest generation of cellular mobile communications networks, which will allow near-instantaneous connectivity, provide higher data rates, and accommodate new applications and technologies. Citing national security concerns, the United States government has been lobbying governments around the world to build out 5G networks without using networking equipment from Chinese telecommunications behemoth Huawei Technologies.

The 5G stakes are high. With high data rates and low latency necessary to support applications such as virtual reality and augmented reality, and new cloud and virtualization technologies such as software-defined networking and network functions virtualization, 5G is expected to transform modern communications. Whoever’s equipment gets used to upgrade the communications infrastructure will influence—and control—how and where the world’s data travels across networks. For more than a year, various U.S. officials have lobbed accusations that Huawei was an untrusted supplier and providers using Huawei equipment risked giving the Chinese government the ability to control parts of the world’s communications networks. Huawei is arguably the largest telecommunications equipment company in the world, and makes gear for practically every step in the network—the switches, gateways, routers, and bridges—that connect user devices to data centers hosting applications and content.

U.S. officials have cited concerns—but have not publicly shown evidence—that Huawei’s networking gear contained backdoors giving the Chinese government access to how and where the world’s data gets routed. They also noted that under China’s National Intelligence Law, Chinese companies such as Huawei are required to cooperate with Chinese intelligence services.

Earlier in the year, FBI director Christopher Wray said there were national security risks to relying on a Chinese telecommunications company for American networks. “As Americans, we should all be concerned by the potential for any company beholden to a foreign government—especially one that doesn’t share our values—to burrow into the American telecommunications market. That kind of access could give a foreign government the capacity to maliciously modify or steal information, conduct undetected espionage, or exert pressure or control,” Wray said.

Asking for Evidence

Wray is correct that it would be a bad idea for any government to have backdoor access to these networks, but it’s not clear what evidence exists for the claim that Huawei is “beholden” to the Chinese government in a way that makes its equipment suspect. Huawei Chairman Ken Hu asked that any evidence against the company be made public, noting the company's "record on security is clean."

"If you have proof and evidence, it should be made public, maybe not to the general public, not to Huawei. But at the very least, it should be made known to telecom operators, because it's telecom operators who are going to buy from Huawei," Hu said.

"Huawei is an independent business organisation. When it comes to cybersecurity and privacy protection, we are committed to siding with our customers … neither Huawei, nor I personally, have ever received any requests from any government to provide improper information," said Huawei founder Ren Zhengfei. Ren also said he would "definitely refuse" if the government ever asked the company for data.

Banning Has a Price

The U.S. passed a law in 2018 barring federal agencies from using Huawei and ZTE technology, and there are reports the White House is weighing an executive order to ban Chinese telecommunications gear from all U.S. networks. While Australia has banned Huawei 5G equipment and New Zealand is limiting its use, European countries have pushed back on U.S. officials.

Both the United Kingdom and Germany have said they cannot find any evidence of a spying program in Huawei equipment. Huawei has agreed to allow British security specialists to scrutinize its hardware and software at its Huawei Cyber Security Evaluation Centre, and has a similar agreement with Germany. The United Kingdom acknowledged China as a threat, but said it will work with Huawei to fix potential issues that its review of the company's source code uncovered. Ciaran Martin, the CEO of the U.K. National Cyber Security Centre, said the agency would be able to handle the challenges involved in monitoring suppliers who may not be considered trustworthy.

The technology gap is also a serious concern. Experts believe Huawei is ahead of European counterparts in terms of developing 5G equipment and many operators are relying on Huawei to build out their 5G networks. A de facto ban would be a considerable setback for Europe’s efforts to stay competitive. Deutsche Telekom said that Europe could fall behind China and the United States by as much as two years if companies did not use Huawei equipment for their 5G deployments, according to Reuters.

It's not just Europe. U.S. carriers in rural areas are resisting the idea of an executive order banning Huawei outright since their networks are heavily dependent on Huawei equipment. The FCC previously considered withholding subsidies from companies that use Huawei equipment in their 5G rollouts, but some reports suggest it is walking back that plan.

“Going with an untrusted supplier like Huawei or ZTE [another Chinese telecom] will have all sorts of ramifications for your national security,” a US State Department official said while in Brussels to speak with various European Union officials, according to a recent AFP report. A delegation of senior officials from the Departments of State, Defense, and Commerce attended this week's Mobile World Congress in Barcelona, Spain, to speak out against Huawei—one of the event’s main sponsors.

The first phase of 5G specifications is expected to be completed by April to accommodate early commercial deployments. The second phase is expected by April 2020. IDC called 2019 a “seminal year” in the mobile industry. Governments have to decide quickly whether or not they are confident enough in Huawei's denials to include the equipment within critical infrastructure, or if they are going to accept the potential delays to rolling out 5G.

]]>
<![CDATA[Privacy, Policy, and the Illusion of Control]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/privacy-policy-and-the-illusion-of-control https://duo.com/decipher/privacy-policy-and-the-illusion-of-control Wed, 27 Feb 2019 00:00:00 -0500

These are strange times in Washington. Congress, which has spent decades conspicuously showing only the most passing interest in privacy, suddenly is awash in proposed privacy legislation and the calendars in both chambers are crowded with committee hearings on the topic. The unending string of breaches and data-misuse and abuse scandals, coupled with increasing consumer outrage, has apparently combined to accomplish that most difficult of tasks: convincing Congress to act.

But there’s a significant difference between knowing that something must be done and knowing what to do. Right now, Congress seems to be stranded somewhere between those two mileposts, and a pair of hearings this week on Capitol Hill did not produce much evidence that is going to change soon.

The good news is that there seems to be a general sentiment in Washington that it’s time to pass a federal privacy law. The various state laws that exist now have laid the groundwork, holding companies accountable for lapses in privacy protection and loss of consumer data, and providing some expectation on the part of consumers that there will be consequences--however fleeting they may be--when these incidents occur. People have become much more conscious of and educated about the ways in which companies collect and use their data in the last few years, and expect that there will be legal and regulatory measures in place to keep those companies from going off the rails. While there is no federal privacy law at the moment, there are several bills at different stages of the legislative process right now, some of which would impose severe fines on companies for violations.

Both the House of Representatives and the Senate held hearings this week to discuss the need for a federal privacy measure, what that could look like, and what it might mean for data collectors as well as consumers. On Wednesday, the Senate Committee on Commerce, Science, and Transportation met to talk about the parameters of a federal policy framework, and members expressed an eagerness to improve the protections for consumers across the board.

“Congress needs to develop a uniquely American data privacy framework. It is clear that we need a strong national data privacy law,” said committee Chairman Roger Wicker (R-Miss.).

A good portion of the Senate hearing focused on the concept of notice and consent, which involves showing people a privacy policy in some form and having them consent to whatever data collection and usage is specified in the policy. This method relies on the idea that people actually read privacy policies (they don’t) and understand the implications of the data collection and usage (they don’t). Which is why many in the privacy community have little use for notice and consent and consider it to be not much more than window dressing.

“I believe that notice and consent are no longer enough,” said Sen. Maria Cantwell (D-Wash.).

Notice and consent also has the effect of pushing much of the responsibility to consumers, an effect that’s magnified by the fact that many people don’t realize what they’re agreeing to when they click a box agreeing to a privacy policy or data collection. It’s simply an obstacle in their way. Woodrow Hartzog, a professor of law and computer science at Northeastern University, said in his written testimony for the Senate hearing that this model doesn’t work at any large scale.

“The problem with notice and choice models is that they create incentives for companies to both hide the risks in their data practices through manipulative design, vague abstractions, and excessive and complex words while at the same time shifting risk by engineering a system meant to expedite the transfer of rights and relinquishment of protections,” he said.

“People are gifted with a dizzying array of switches, delete buttons, and privacy settings. We are told that all is revealed in a company’s privacy policy, if only we would read it. After privacy harms, companies promise more and better controls. And if they happen again, the diagnosis is often that companies simply must have not added enough or improved dials and check boxes.”

There was plenty of discussion about what kinds of policies, controls, and incentives don’t work in protecting consumer data privacy, but not much in the way of concrete suggestions for what does work. That’s probably because both the House and Senate hearings were populated mainly by witnesses from advertising and technology industry associations and policy think tanks and neither included an actual privacy officer or practitioner. The committees would benefit from hearing from people who do this on a daily basis and have a clear sense of what actually works rather than falling back on concepts that are known to be insufficient, such as consumer control. As Hartzog pointed out, the idea of control is meaningless without other informative and protective elements to help people make proper decisions.

“The problem with thinking about privacy in terms of control is that it’s treated as though the mere gift of it is privacy in and of itself. In fact, it’s illusory,” he said. “Control ostensibly serves to give people autonomy. Privacy is a broad concept that involves lots of different elements and shouldn’t be distilled down to just control.”

Congress’s interest in meaningful federal data privacy legislation looks to be sincere, but any resulting laws will be diminished and less useful without meaningful input from privacy practitioners and advocates.

]]>
<![CDATA[Deciphering Sneakers]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/deciphering-sneakers https://duo.com/decipher/deciphering-sneakers Tue, 26 Feb 2019 00:00:00 -0500

Sneakers isn't just one of the best hacker movies of all time, it's a spiritual successor to WarGames and one of the most entertaining movies ever. Full stop. The tale of a crew of outcasts with sketchy pasts who break into companies for a living (not a very good one), Sneakers has an all-star cast, a killer script, and a terrifyingly prescient story about information and its control over our lives in the modern age. This is Deciphering Sneakers.

]]>
<![CDATA[ICANN Warns of 'Ongoing and Significant' Threat to DNS]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/icann-warns-of-ongoing-and-significant-threat-to-dns https://duo.com/decipher/icann-warns-of-ongoing-and-significant-threat-to-dns Tue, 26 Feb 2019 00:00:00 -0500

An ongoing series of attacks on parts of the Internet’s core infrastructure have both government agencies and Internet governing bodies warning that the network is facing an imminent threat.

The Internet Corporation for Assigned Names and Numbers (ICANN), which coordinates the assignment and maintenance of namespace, has followed up a recent bulletin from the U.S. Cybersecurity and Infrastructure Security Agency (CISA) with a warning of its own, saying that the campaign targeting DNS systems is a significant risk to the security and stability of the Internet. The ICANN warning cites a growing pattern of attacks using different techniques in order to hijack traffic through DNS modifications and compromises. The group advocates for the full deployment of DNSSEC as a mitigation against the attacks.

“Some of the attacks target the DNS, in which unauthorized changes to the delegation structure of domain names are made, replacing the addresses of intended servers with addresses of machines controlled by the attackers. This particular type of attack, which targets the DNS, only works when DNSSEC is not in use,” the ICANN advisory says.

“DNSSEC is a technology developed to protect against such changes by digitally 'signing' data to assure its validity. Although DNSSEC cannot solve all forms of attack against the DNS, when it is used, unauthorized modification to DNS information can be detected, and users are blocked from being misdirected.”

The DNSSEC extensions are designed to implement a layer of security on top of the DNS system by providing authentication of responses from DNS servers. DNSSEC has been deployed in a variety of large networks and all of the major top-level domains (TLDs) have been signed and linked to the DNSSEC root. The system can help defend against some attacks on the DNS system, including those that involve DNS cache poisoning, but it’s not a panacea by any means.
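
One visible result of that authentication: when a validating resolver has verified the DNSSEC signatures on an answer, it sets the AD ("Authenticated Data") bit in the DNS response header. A toy illustration of reading that bit from the 16-bit flags word (the flag constants follow the standard DNS header layout; the helper function is illustrative, not part of any real resolver):

```python
# The DNS header's AD ("Authenticated Data") bit -- bit 0x0020 in the
# 16-bit flags field -- is how a validating resolver signals that the
# answer passed DNSSEC validation.
AD_BIT = 0x0020

def dnssec_validated(flags: int) -> bool:
    """True if the response flags carry the AD bit set by a validating resolver."""
    return bool(flags & AD_BIT)

# 0x81A0 = QR|RD|RA|AD: a typical validated recursive response.
print(dnssec_validated(0x81A0))  # True
# 0x8180 = QR|RD|RA: same response shape, but no validation signal.
print(dnssec_validated(0x8180))  # False
```

Note that the AD bit is only meaningful on the trusted path between a stub and its validating resolver; it is a signal, not a proof, which is part of why DNSSEC alone isn't a complete answer.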

The ICANN warning comes several weeks after CISA detailed a series of DNS-hijacking attacks that targeted federal government agencies in the United States during the government’s recent shutdown. Those attacks involved the use of stolen, legitimate credentials to access systems that manage an agency’s DNS records, which the attackers would then modify in order to shunt traffic to servers they control. That kind of attack can have long-lasting effects on the victim organization.

“Because the attacker can set DNS record values, they can also obtain valid encryption certificates for an organization’s domain names. This allows the redirected traffic to be decrypted, exposing any user-submitted data. Since the certificate is valid for the domain, end users receive no error warnings,” the CISA warning said.

DNS hijacking is a constant on the Internet, and because of the way the attacks work, they often go unnoticed by victim organizations as well as individuals. They can be effective tactics for attackers to gain access to large volumes of traffic, and nation states have been known to employ DNS hijacking in the past. Governments often are the targets of DNS hijacking campaigns, too, and in November the Cisco Talos Group uncovered a campaign that targeted government agencies in both Lebanon and the United Arab Emirates. The campaign used a custom piece of malware called DNSpionage that allowed the attackers to communicate covertly with compromised machines.
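
Because hijacks like these change what a domain resolves to, one common detective control is to compare live DNS answers against a known-good baseline and alert on anything unexpected. A minimal sketch of that comparison (the domain names and addresses below are hypothetical example data, not real records):

```python
# Sketch: flag domains whose observed A records deviate from a
# known-good baseline, the core of a simple DNS-hijack monitor.

def find_hijacked(baseline: dict, observed: dict) -> list:
    """Return domains whose observed addresses fall outside the baseline set."""
    suspicious = []
    for domain, expected_ips in baseline.items():
        seen = observed.get(domain, set())
        # Any address outside the expected set is worth investigating.
        if seen - expected_ips:
            suspicious.append(domain)
    return suspicious

baseline = {
    "mail.example.gov": {"192.0.2.10"},
    "vpn.example.gov": {"192.0.2.20", "192.0.2.21"},
}
observed = {
    "mail.example.gov": {"192.0.2.10"},
    "vpn.example.gov": {"198.51.100.99"},  # unexpected: possible hijack
}

print(find_hijacked(baseline, observed))  # ['vpn.example.gov']
```

A real deployment would feed this from scheduled lookups against multiple resolvers and would also watch certificate transparency logs, since DNS control lets attackers obtain valid certificates.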

“Our investigation discovered two events: the DNSpionage malware and a DNS redirection campaign. In the case of the malware campaign, we don't know the exact target, but we do know the attackers went after users in Lebanon and the UAE,” the Talos analysis says.

“It is clear that this threat actor was able to redirect DNS from government-owned domains in two different countries over the course of two months, as well as a national Lebanese airline.”

ICANN plans to hold an open session during its meeting next month in Japan to discuss the threats to the DNS system. The group said that while DNSSEC doesn’t address all of the threats, it can help protect against some of the more prevalent attacks on DNS.

“Although this will not solve the security problems of the Internet, it aims to assure that Internet users reach their desired online destination by helping to prevent so-called ‘man in the middle’ attacks where a user is unknowingly re-directed to a potentially malicious site,” ICANN said in its statement.

]]>
<![CDATA[A Traveler's Guide to OPSEC]]> info@decipher.sc (Dave Lewis) https://duo.com/decipher/a-traveler-s-guide-to-opsec https://duo.com/decipher/a-traveler-s-guide-to-opsec Mon, 25 Feb 2019 00:00:00 -0500

I’m very fortunate to have a career that I love, and one of the things that I enjoy about my career trajectory is that it has afforded me the ability to travel and see the world. As the RSA Security Conference in San Francisco approaches, many security folks find their inboxes filling up with emails asking for yet another five minutes of time. The calendars overflow with a deluge of meetings.

But the part that really stands out for me every year isn’t the meeting hopscotch, it is watching conference attendees wandering around the Moscone Center and Union Square with their badges and lanyards hanging around their necks. From an operational security, or OPSEC, point of view this is an unfortunate situation.

Travelers always need to be cognizant of their surroundings when they venture out. Criminals are not shy about seizing an opportunity to make a quick buck or find an easy victim, and providing them with free information is never a good idea. When traveling there are some steps that should be taken to ensure some level of safety. The most basic rule is to always be aware of your surroundings. But this raises the question: what precautions should cautious, but not insanely paranoid, people take when they travel, especially internationally?

Here is a compilation of steps that I would recommend people utilize when they travel to maintain their peace of mind.

KISS

A good rule of thumb is to keep it simple when you travel. Don’t need it? Don’t take it with you. A friend of mine would carry a backpack with him on travels that was overflowing with technical gear of all shapes and sizes. There was rarely a need for all of the gear, but we are creatures of habit. All of that unnecessary equipment can create targets of opportunity for a criminal when you set your backpack down in a coffee shop or start rummaging through it in the airport.

Patch All The Things!

When you travel be sure to have your mobile devices, laptops and other devices patched to the current level before you leave your house. Having the most recent version of the OS for all of your devices gives you the best level of protection against opportunistic attackers, especially on unfamiliar networks. And don’t ignore your apps, either. Vulnerabilities in older versions of mobile apps can be soft targets for attackers.

Border Security

Spot checks by border security services in numerous countries around the world are becoming more common and it’s not outside the realm of possibility that your device may be seized for inspection. If that happens, don’t fight or cause a ruckus as this will only cause you further inconvenience and could result in a delay and a more in-depth search of your equipment. The law on border searches of mobile devices is still evolving in the United States and other countries and it can be difficult to know what your rights are at any given border checkpoint.

Power Down

When you’re going through an airport security checkpoint take the step to power down your devices before you get there. For many laptops and phones, completely powering down the device engages the full-disk encryption, providing a substantial layer of protection against random searches and opportunistic attackers. Just putting a device into sleep mode usually isn’t sufficient, so it’s worth the time to take the extra step of completely powering the device down.

"Keep Your Head Up, Stick On The Ice"

OK, great, now you have made it through your security screening and found your way to the airport lounge. You’ve managed to drop your bags and pick up a cup of coffee. We do tend to be overly trusting, which can open us up to attacks. If I had a dollar for every time I had encountered a laptop that was logged in and left unattended in an airport lounge I’d be able to afford a nice vacation someplace warm. As I mentioned earlier, be aware of your surroundings and never leave any of your devices unlocked and unattended in public. There’s no need to make criminals’ lives easier.

“Is this line secure?"

If you feel compelled to connect to the wifi in a public place, be sure to use a VPN. If you don’t have a corporate VPN solution, you can set up your own personal account. Before you login to your various accounts, make sure that you’ve enabled two-factor authentication wherever possible. Solutions such as Duo Security (https://duo.com/), cough, or Yubico’s YubiKey U2F hardware can help greatly in this regard.

“But wait, there’s more!"

The announcement crackles letting you know it’s time to board your flight. As you slide into your seat, remember that you still need to be aware of your surroundings. Seats on many airlines have a USB charging port available, which may seem attractive, but the problem here is that attackers can install devices on those chargers that grab data from your phone as it charges. As a smart traveler, take the extra step of getting a USB sync stop to avoid having your data copied.

“What are you looking at?"

In addition to protecting against surreptitious data exfiltration, travelers also need to be aware of prying eyes. When you’re sitting on a plane and you open up your laptop there are multiple parties in the immediate vicinity that could see your screen. A wise choice would be to invest in a privacy screen such as those made by 3M to keep people from seeing what’s on your screen. I was once on a trip from Washington, D.C., to Toronto and I sat across the aisle from a prominent news editor and could see everything on his screen. After we cleared customs in Toronto I caught up to him and explained the situation. He was aghast at first but came to realize what I was trying to tell him. He thanked me and bought a privacy screen online while I turned to head for my cab.

“Checking In vs Check ins”

When traveling it’s wise to exercise some caution when you’re sharing with the rest of the world on social media. For example, sharing a picture of your boarding pass or itinerary can alert thieves and other troublemakers that you’re not going to be home for a while. That information, combined with other data readily available online for many people--such as home address--can provide a blueprint to a home break-in. When in doubt, don’t hit send.

While the aforementioned advice isn’t exhaustive, it will certainly help you stay safe when you travel. Keep your head on a swivel and your mobile devices close. You don’t want to present yourself as a target of opportunity for a bad guy.

Think, think and think again.

Dave Lewis is a global advisory CISO at Duo Security.

]]>
<![CDATA[BIND 9 Contains Serious Memory Leak]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/bind-9-contains-serious-memory-leak https://duo.com/decipher/bind-9-contains-serious-memory-leak Fri, 22 Feb 2019 00:00:00 -0500

Several current versions of the BIND open-source DNS software contain a serious memory leak that an attacker could use to knock a vulnerable server offline.

The vulnerability is in BIND 9 and the Internet Systems Consortium, which maintains BIND, has released updates for all of the affected versions. The bug lies in the way that BIND processes some specific messages and handles memory allocation during that operation. An attacker could exploit the vulnerability by sending a specially crafted packet to a vulnerable server, which would trigger the memory leak.

“By exploiting this condition, an attacker can potentially cause named's memory use to grow without bounds until all memory available to the process is exhausted. Typically a server process is limited as to the amount of memory it can use but if the named process is not limited by the operating system all free memory on the server could be exhausted,” the BIND advisory says.

This vulnerability affects versions 9.10.7 through 9.10.8-P1, 9.11.3 through 9.11.5-P1, 9.12.0 through 9.12.3-P1, and 9.10.7-S1 through 9.11.5-S3 of the Supported Preview Edition.
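
For operators, the immediate question is whether a given named build falls inside one of those ranges. A rough sketch of that comparison (the ranges are transcribed from the advisory above; the Supported Preview "-S" branch is omitted for simplicity, and the parsing helper is illustrative, not an official ISC tool):

```python
# Illustrative check of a BIND 9 version string against the affected
# ranges listed in the advisory. A "-P1" suffix is treated as a patch
# level appended to the numeric version.

def parse(version: str) -> tuple:
    """Turn '9.11.5-P1' into a comparable tuple: (9, 11, 5, 1)."""
    base, _, patch = version.partition("-P")
    parts = tuple(int(p) for p in base.split("."))
    return parts + (int(patch) if patch else 0,)

AFFECTED_RANGES = [
    ("9.10.7", "9.10.8-P1"),
    ("9.11.3", "9.11.5-P1"),
    ("9.12.0", "9.12.3-P1"),
]

def is_affected(version: str) -> bool:
    v = parse(version)
    return any(parse(lo) <= v <= parse(hi) for lo, hi in AFFECTED_RANGES)

print(is_affected("9.12.2"))     # True
print(is_affected("9.12.3-P2"))  # False (fixed release)
```

In practice the authoritative answer is the advisory's own matrix, but a check like this is handy when auditing a fleet of resolvers with mixed versions.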

The ISC also is warning about two other vulnerabilities present in various versions of BIND 9. Neither one is as serious as the memory leak in named, but both can be exploited remotely. The first flaw is in the managed-keys feature of BIND, which allows a BIND DNS resolver to maintain the keys that trust anchors use as part of their DNSSEC validation.

“Due to an error in the managed-keys feature it is possible for a BIND server which uses managed-keys to exit due to an assertion failure if, during key rollover, a trust anchor's keys are replaced with keys which use an unsupported algorithm,” the BIND advisory says.

The good news for BIND operators, though, is that it’s not very likely that an attacker would be able to get to this flaw.

“This particular vulnerability would be very difficult for an arbitrary attacker to use because it requires an operator to have BIND configured to use a trust anchor managed by the attacker. However, if successfully exercised, the defect will cause named to deliberately exit after encountering an assertion failure. It is more likely, perhaps, that this bug could be encountered accidentally, as not all versions of BIND support the same set of cryptographic algorithms,” the advisory says.

“Specifically, recent branches of BIND have begun deliberately removing support for cryptographic algorithms that are now deprecated (for example because they are no longer considered sufficiently secure.) This vulnerability could be encountered if a resolver running a version of BIND which has removed support for deprecated algorithms is configured to use a trust anchor which elects to change algorithm types to one of those deprecated algorithms.”

The other vulnerability that ISC has patched this week is an issue with the way that BIND handles some zone transfers. A zone transfer is a method for copying a DNS database across a set of servers. The bug arises because some of the controls that BIND has in place to deal with some zone transfers aren’t effective.

“A client exercising this defect can request and receive a zone transfer of a DLZ even when not permitted to do so by the allow-transfer ACL,” BIND’s advisory says.
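
Conceptually, an allow-transfer ACL is just a list of networks a requesting client must match before a zone transfer is served, and the defect meant that check was skipped for DLZ zones. A simplified model of such a check (illustrative Python, not BIND's actual implementation; the addresses are documentation examples):

```python
# Simplified model of an allow-transfer ACL check: the client address
# must fall inside one of the permitted networks or the transfer is refused.
import ipaddress

def transfer_allowed(client_ip: str, acl: list) -> bool:
    """True if client_ip matches any network in the ACL."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(net) for net in acl)

# Hypothetical ACL: only the secondary name servers may pull the zone.
allow_transfer = ["192.0.2.0/24", "2001:db8::/32"]

print(transfer_allowed("192.0.2.53", allow_transfer))   # True
print(transfer_allowed("203.0.113.9", allow_transfer))  # False
```

The vulnerability, in effect, let a client reach the transfer path for DLZ zones without this gate being consulted.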

ISC has released updated versions to fix each of these vulnerabilities.

]]>